update posts for torchlight

This commit is contained in:
John Bowdre 2023-11-05 17:38:20 -06:00
parent f6cc404866
commit 36335c3774
11 changed files with 527 additions and 475 deletions


@ -22,62 +22,64 @@ I eventually came across [this blog post](https://www.virtualnebula.com/blog/201
### Preparing the SSH host
I deployed a Windows Server 2019 Core VM to use as my SSH host, and I joined it to my AD domain as `win02.lab.bowdre.net`. Once that's taken care of, I need to install the RSAT DNS tools so that I can use the `Add-DnsServerResourceRecord` and associated cmdlets. I can do that through PowerShell like so:
```powershell
# Install RSAT DNS tools [tl! .nocopy]
Add-WindowsCapability -online -name Rsat.Dns.Tools~~~~0.0.1.0 # [tl! .cmd_pwsh]
```
Instead of using a third-party SSH server, I'll use the OpenSSH Server that's already available in Windows 10 (1809+) and Server 2019:
```powershell
# Install OpenSSH Server [tl! .nocopy]
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 # [tl! .cmd_pwsh]
```
I'll also want to set it so that the default shell upon SSH login is PowerShell (rather than the standard Command Prompt) so that I can have easy access to those DNS cmdlets:
```powershell
# Set PowerShell as the default Shell (for access to DNS cmdlets) # [tl! .nocopy]
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell ` # [tl! .cmd_pwsh:2
-Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
-PropertyType String -Force
```
I'll be using my `lab\vra` service account for managing DNS. I've already given it the appropriate rights on the DNS server, but I'll also add it to the Administrators group on my SSH host:
```powershell
# Add the service account as a local administrator # [tl! .nocopy]
Add-LocalGroupMember -Group Administrators -Member "lab\vra" # [tl! .cmd_pwsh]
```
And I'll modify the OpenSSH configuration so that only members of that Administrators group are permitted to log into the server via SSH:
```powershell
# Restrict SSH access to members in the local Administrators group [tl! .nocopy]
(Get-Content "C:\ProgramData\ssh\sshd_config") -Replace "# Authentication:", `
"$&`nAllowGroups Administrators" | Set-Content "C:\ProgramData\ssh\sshd_config" # [tl! .cmd_pwsh:-1]
```
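The `$&` in that replacement string is a regex backreference to the entire matched text, which is how the original `# Authentication:` line survives with the new `AllowGroups` directive appended beneath it. The same whole-match trick exists in `sed` as `&` (an illustrative sketch only; the host itself is managed with PowerShell):
```shell
# sed's '&' echoes the whole match back into the replacement,
# just like PowerShell's "$&"
printf '%s\n' '# Authentication:' 'PubkeyAuthentication yes' |
  sed 's/# Authentication:/&\nAllowGroups Administrators/'
# → # Authentication:
#   AllowGroups Administrators
#   PubkeyAuthentication yes
```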
Finally, I'll start the `sshd` service and set it to start up automatically:
```powershell
# Start service and set it to automatic [tl! .nocopy]
Set-Service -Name sshd -StartupType Automatic -Status Running # [tl! .cmd_pwsh]
```
#### A quick test
At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone:
```powershell
ssh vra@win02.lab.bowdre.net # [tl! .cmd_pwsh]
vra@win02.lab.bowdre.net`'s password: # [tl! .nocopy:3]
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net `
-Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99 # [tl! .cmd_pwsh:-1]
nslookup testy # [tl! .cmd_pwsh]
Server: win01.lab.bowdre.net # [tl! .nocopy:start]
Address: 192.168.1.5
Name: testy.lab.bowdre.net
Address: 172.16.99.99
# [tl! .nocopy:end]
Remove-DnsServerResourceRecord -ComputerName win01.lab.bowdre.net `
-Name testy -ZoneName lab.bowdre.net -RRType A -Force # [tl! .cmd_pwsh:-1]
nslookup testy # [tl! .cmd_pwsh]
Server: win01.lab.bowdre.net # [tl! .nocopy:3]
Address: 192.168.1.5
*** win01.lab.bowdre.net can't find testy: Non-existent domain
@ -111,23 +113,24 @@ resources:
```
So here's the complete cloud template that I've been working on:
```yaml
# torchlight! {"lineNumbers": true}
formatVersion: 1 # [tl! focus:1]
inputs:
site: # [tl! collapse:5]
type: string
title: Site
enum:
- BOW
- DRE
image: # [tl! collapse:6]
type: string
title: Operating System
oneOf:
- title: Windows Server 2019
const: ws2019
default: ws2019
size: # [tl! collapse:10]
title: Resource Size
type: string
oneOf:
@ -138,18 +141,18 @@ inputs:
- title: 'Small [2vCPU|2GB]'
const: small
default: small
network: # [tl! collapse:2]
title: Network
type: string
adJoin: # [tl! collapse:3]
title: Join to AD domain
type: boolean
default: true
staticDns: # [tl! highlight:3 focus:3]
title: Create static DNS record
type: boolean
default: false
environment: # [tl! collapse:10]
type: string
title: Environment
oneOf:
@ -160,7 +163,7 @@ inputs:
- title: Production
const: P
default: D
function: # [tl! collapse:14]
type: string
title: Function Code
oneOf:
@ -175,34 +178,34 @@ inputs:
- title: Testing (TST)
const: TST
default: TST
app: # [tl! collapse:5]
type: string
title: Application Code
minLength: 3
maxLength: 3
default: xxx
description: # [tl! collapse:4]
type: string
title: Description
description: Server function/purpose
default: Testing and evaluation
poc_name: # [tl! collapse:3]
type: string
title: Point of Contact Name
default: Jack Shephard
poc_email: # [tl! collapse:4]
type: string
title: Point of Contact Email
default: username@example.com
pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
ticket: # [tl! collapse:3]
type: string
title: Ticket/Request Number
default: 4815162342
resources: # [tl! focus:3]
Cloud_vSphere_Machine_1:
type: Cloud.vSphere.Machine
properties: # [tl! collapse:start]
image: '${input.image}'
flavor: '${input.size}'
site: '${input.site}'
@ -212,9 +215,9 @@ resources:
ignoreActiveDirectory: '${!input.adJoin}'
activeDirectory:
relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}' # [tl! collapse:end]
staticDns: '${input.staticDns}' # [tl! focus highlight]
dnsDomain: lab.bowdre.net # [tl! collapse:start]
poc: '${input.poc_name + " (" + input.poc_email + ")"}'
ticket: '${input.ticket}'
description: '${input.description}'
@ -222,10 +225,10 @@ resources:
- network: '${resource.Cloud_vSphere_Network_1.id}'
assignment: static
constraints:
- tag: 'comp:${to_lower(input.site)}' # [tl! collapse:end]
Cloud_vSphere_Network_1:
type: Cloud.vSphere.Network
properties: # [tl! collapse:3]
networkType: existing
constraints:
- tag: 'net:${input.network}'
@ -280,9 +283,12 @@ Now we're ready for the good part: inserting a new scriptable task into the work
![Task inputs](20210809_task_inputs.png)
And here's the JavaScript for the task:
```javascript
// torchlight! {"lineNumbers": true}
// JavaScript: Create DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string),
// sshHost (string), sshUser (string), sshPass (secureString),
// supportedDomains (Array/string)
// Outputs: None
var staticDns = inputProperties.customProperties.staticDns;
@ -341,9 +347,12 @@ The schema will include a single scriptable task:
And it's going to be *pretty damn similar* to the other one:
```javascript
// torchlight! {"lineNumbers": true}
// JavaScript: Delete DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string),
// sshHost (string), sshUser (string), sshPass (secureString),
// supportedDomains (Array/string)
// Outputs: None
var staticDns = inputProperties.customProperties.staticDns;
@ -396,9 +405,9 @@ Once the deployment completes, I go back into vRO, find the most recent item in
![Workflow success!](20210813_workflow_success.png)
And I can run a quick query to make sure that name actually resolves:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A # [tl! .cmd]
172.16.30.10 # [tl! .nocopy]
```
It works!
@ -410,8 +419,8 @@ Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning ta
![VM Deprovisioning workflow](20210813_workflow_deletion.png)
And I can `dig` a little more to make sure the name doesn't resolve anymore:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A # [tl! .cmd]
```


@ -19,8 +19,8 @@ Here's how.
#### Step Zero: Prereqs
You'll need Windows 10 1903 build 18362 or newer (on x64). You can check by running `ver` from a Command Prompt:
```powershell
ver # [tl! .cmd_pwsh]
Microsoft Windows [Version 10.0.18363.1082] # [tl! .nocopy]
```
We're interested in that third set of numbers. 18363 is bigger than 18362 so we're good to go!
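The check boils down to pulling out that third number and comparing it to 18362. Just to make the logic explicit, here's the same comparison sketched in shell (illustrative only; on Windows you'd simply read the `ver` output):
```shell
ver_string='Microsoft Windows [Version 10.0.18363.1082]'
# pull out the dotted version, then grab the third field (the build number)
build="$(printf '%s' "$ver_string" | grep -o '[0-9][0-9.]*' | cut -d. -f3)"
if [ "$build" -ge 18362 ]; then
  echo "Build $build supports WSL2"
fi
# → Build 18363 supports WSL2
```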
@ -28,13 +28,13 @@ We're interested in that third set of numbers. 18363 is bigger than 18362 so we'
*(Not needed if you've already been using WSL1.)*
You can do this by dropping the following into an elevated PowerShell prompt:
```powershell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart # [tl! .cmd_pwsh]
```
#### Step Two: Enable the Virtual Machine Platform feature
Drop this in an elevated PowerShell prompt:
```powershell
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart # [tl! .cmd_pwsh]
```
And then reboot (this is still Windows, after all).
@ -44,7 +44,7 @@ Download it from [here](https://wslstorestorage.blob.core.windows.net/wslblob/ws
#### Step Four: Set WSL2 as your default
Open a PowerShell window and run:
```powershell
wsl --set-default-version 2 # [tl! .cmd_pwsh]
```
#### Step Five: Install a Linux distro, or upgrade an existing one
@ -52,14 +52,14 @@ If you're brand new to this WSL thing, head over to the [Microsoft Store](https:
If you've already got a WSL1 distro installed, first run `wsl -l -v` in PowerShell to make sure you know the distro name:
```powershell
wsl -l -v # [tl! .cmd_pwsh]
NAME STATE VERSION # [tl! .nocopy:1]
* Debian Running 2
```
And then upgrade the distro to WSL2 with `wsl --set-version <distro_name> 2`:
```powershell
PS C:\Users\jbowdre> wsl --set-version Debian 2 # [tl! .cmd_pwsh]
Conversion in progress, this may take a few minutes... # [tl! .nocopy]
```
Cool!


@ -42,12 +42,13 @@ I'm going to use the [Docker setup](https://docs.ntfy.sh/install/#docker) on a s
#### Ntfy in Docker
So I'll start by creating a new directory at `/opt/ntfy/` to hold the goods, and create a compose config.
```shell
sudo mkdir -p /opt/ntfy # [tl! .cmd:1]
sudo vim /opt/ntfy/docker-compose.yml
```
```yaml
# torchlight! {"lineNumbers": true}
# /opt/ntfy/docker-compose.yml
version: "2.3"
@ -81,21 +82,22 @@ This config will create/mount folders in the working directory to store the ntfy
I can go ahead and bring it up:
```shell
sudo docker-compose up -d # [tl! focus:start .cmd]
Creating network "ntfy_default" with the default driver # [tl! .nocopy:start]
Pulling ntfy (binwiederhier/ntfy:)...
latest: Pulling from binwiederhier/ntfy # [tl! focus:end]
7264a8db6415: Pull complete
1ac6a3b2d03b: Pull complete
Digest: sha256:da08556da89a3f7317557fd39cf302c6e4691b4f8ce3a68aa7be86c4141e11c8
Status: Downloaded newer image for binwiederhier/ntfy:latest # [tl! focus:1]
Creating ntfy ... done # [tl! .nocopy:end]
```
#### Caddy Reverse Proxy
I'll also want to add [the following](https://docs.ntfy.sh/config/#nginxapache2caddy) to my Caddy config:
```text
# torchlight! {"lineNumbers": true}
# /etc/caddy/Caddyfile
ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
reverse_proxy localhost:2586
@ -112,8 +114,8 @@ ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
```
And I'll restart Caddy to apply the config:
```shell
sudo systemctl restart caddy # [tl! .cmd]
```
Now I can point my browser to `https://ntfy.runtimeterror.dev` and see the web interface:
@ -124,9 +126,9 @@ I can subscribe to a new topic:
![Subscribing to a public topic](subscribe_public_topic.png)
And publish a message to it:
```curl
curl -d "Hi" https://ntfy.runtimeterror.dev/testy # [tl! .cmd]
{"id":"80bUl6cKwgBP","time":1694981305,"expires":1695024505,"event":"message","topic":"testy","message":"Hi"} # [tl! .nocopy]
```
Which will then show up as a notification in my browser:
@ -138,6 +140,7 @@ So now I've got my own ntfy server, and I've verified that it works for unauthen
I'll start by creating a `server.yml` config file which will be mounted into the container. This config will specify where to store the user database and switch the default ACL to `deny-all`:
```yaml
# torchlight! {"lineNumbers": true}
# /opt/ntfy/etc/ntfy/server.yml
auth-file: "/var/lib/ntfy/user.db"
auth-default-access: "deny-all"
@ -145,8 +148,8 @@ base-url: "https://ntfy.runtimeterror.dev"
```
I can then restart the container, and try again to subscribe to the same (or any other topic):
```shell
sudo docker-compose down && sudo docker-compose up -d # [tl! .cmd]
```
@ -154,36 +157,34 @@ Now I get prompted to log in:
![Login prompt](login_required.png)
I'll need to use the ntfy CLI to create/manage entries in the user DB, and that means first grabbing a shell inside the container:
```shell
sudo docker exec -it ntfy /bin/sh # [tl! .cmd]
```
For now, I'm going to create three users: one as an administrator, one as a "writer", and one as a "reader". I'll be prompted for a password for each:
```shell
ntfy user add --role=admin administrator # [tl! .cmd]
user administrator added with role admin # [tl! .nocopy:1]
ntfy user add writer # [tl! .cmd]
user writer added with role user # [tl! .nocopy:1]
ntfy user add reader # [tl! .cmd]
user reader added with role user # [tl! .nocopy]
```
The admin user has global read+write access, but right now the other two can't do anything. Let's make it so that `writer` can write to all topics, and `reader` can read from all topics:
```shell
ntfy access writer '*' write # [tl! .cmd:1]
ntfy access reader '*' read
```
I could lock these down further by selecting specific topic names instead of `'*'` but this will do fine for now.
Let's go ahead and verify the access as well:
```shell
ntfy access # [tl! .cmd]
user administrator (role: admin, tier: none) # [tl! .nocopy:8]
- read-write access to all topics (admin role)
user reader (role: user, tier: none)
- read-only access to topic *
@ -195,17 +196,17 @@ user * (role: anonymous, tier: none)
```
While I'm at it, I also want to configure an access token to be used with the `writer` account. I'll be able to use that instead of username+password when publishing messages.
```shell
ntfy token add writer # [tl! .cmd]
token tk_mm8o6cwxmox11wrnh8miehtivxk7m created for user writer, never expires # [tl! .nocopy]
```
I can go back to the web, subscribe to the `testy` topic again using the `reader` credentials, and then test sending an authenticated notification with `curl`:
```curl
curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \ # [tl! .cmd]
-d "Once more, with auth!" \
https://ntfy.runtimeterror.dev/testy
{"id":"0dmX9emtehHe","time":1694987274,"expires":1695030474,"event":"message","topic":"testy","message":"Once more, with auth!"} # [tl! .nocopy]
```
![Authenticated notification](authenticated_notification.png)
@ -222,6 +223,7 @@ I may want to wind up having servers notify for a variety of conditions so I'll
`/usr/local/bin/ntfy_push.sh`:
```shell
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash
curl \
@ -234,8 +236,8 @@ curl \
Note that I'm using a new topic name now: `server_alerts`. Topics are automatically created when messages are posted to them. I just need to make sure to subscribe to the topic in the web UI (or mobile app) so that I can receive these notifications.
Okay, now let's make it executable and then give it a quick test:
```shell
chmod +x /usr/local/bin/ntfy_push.sh # [tl! .cmd:1]
/usr/local/bin/ntfy_push.sh "Script Test" "This is a test from the magic script I just wrote."
```
@ -246,6 +248,7 @@ I don't know an easy way to tell a systemd service definition to pass arguments
`/usr/local/bin/ntfy_boot_complete.sh`:
```shell
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash
TITLE="$(hostname -s)"
@ -255,14 +258,15 @@ MESSAGE="System boot complete"
```
And this one should be executable as well:
```shell
chmod +x /usr/local/bin/ntfy_boot_complete.sh # [tl! .cmd]
```
##### Service Definition
Finally I can create and register the service definition so that the script will run at each system boot.
`/etc/systemd/system/ntfy_boot_complete.service`:
```ini
# torchlight! {"lineNumbers": true}
[Unit]
After=network.target
@ -273,8 +277,8 @@ ExecStart=/usr/local/bin/ntfy_boot_complete.sh
WantedBy=default.target
```
```shell
sudo systemctl daemon-reload # [tl! .cmd:1]
sudo systemctl enable --now ntfy_boot_complete.service
```
@ -292,7 +296,8 @@ Enabling ntfy as a notification handler is pretty straight-forward, and it will
##### Notify Configuration
I'll add ntfy to Home Assistant by using the [RESTful Notifications](https://www.home-assistant.io/integrations/notify.rest/) integration. For that, I just need to update my instance's `configuration.yaml` to configure the connection.
```yaml
# torchlight! {"lineNumbers": true}
# configuration.yaml
notify:
- name: ntfy
@ -309,6 +314,7 @@ notify:
The `Authorization` line references a secret stored in `secrets.yaml`:
```yaml
# torchlight! {"lineNumbers": true}
# secrets.yaml
ntfy_token: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m
```
@ -327,6 +333,7 @@ I'll use the Home Assistant UI to push a notification through ntfy if any of my
The business end of this is the service call at the end:
```yaml
# torchlight! {"lineNumbers": true}
service: notify.ntfy
data:
title: Leak detected!


@ -51,14 +51,14 @@ Running `tanzu completion --help` will tell you what's needed, and you can just
```
So to get the completions to load automatically whenever you start a `bash` shell, run:
```shell
tanzu completion bash > $HOME/.tanzu/completion.bash.inc # [tl! .cmd:1]
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile
```
For a `zsh` shell, it's:
```shell
echo "autoload -U compinit; compinit" >> ~/.zshrc # [tl! .cmd:1]
tanzu completion zsh > "${fpath[1]}/_tanzu"
```
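If you're wondering what that `printf` in the `bash` step actually appends, running the same command against a throwaway file shows the exact stanza (a sketch using a temp file in place of `.bash_profile`):
```shell
profile="$(mktemp)"  # stand-in for $HOME/.bash_profile
printf "\n# Tanzu shell completion\nsource '%s/.tanzu/completion.bash.inc'\n" "$HOME" >> "$profile"
cat "$profile"
# prints a blank line, the comment, and the source line, e.g.:
#   # Tanzu shell completion
#   source '/home/you/.tanzu/completion.bash.inc'
```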

View file

@ -85,8 +85,8 @@ Let's start with the gear (hardware and software) I needed to make this work:
The very first task is to write the required firmware image (download [here](https://github.com/jaredmcneill/quartz64_uefi/releases)) to a micro SD card. I used a 64GB card that I had lying around but you could easily get by with a *much* smaller one; the firmware image is tiny, and the card can't be used for storing anything else. Since I'm doing this on a Chromebook, I'll be using the [Chromebook Recovery Utility (CRU)](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) for writing the images to external storage as described [in another post](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/).
After downloading [`QUARTZ64_EFI.img.gz`](https://github.com/jaredmcneill/quartz64_uefi/releases/download/2022-07-20/QUARTZ64_EFI.img.gz), I need to get it into a format recognized by CRU and, in this case, that means extracting the gzipped archive and then compressing the `.img` file into a standard `.zip`:
```shell
gunzip QUARTZ64_EFI.img.gz # [tl! .cmd:1]
zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img
```
@ -98,8 +98,8 @@ I can then write it to the micro SD card by opening CRU, clicking on the gear ic
I'll also need to prepare the ESXi installation media (download [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM)). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new `ESX-OSData` partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the unavailable/flaky USB support of the Quartz64. (While you *can* install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.)
In any case, to make the downloaded `VMware-VMvisor-Installer-7.0-20133114.aarch64.iso` writeable with CRU all I need to do is add `.bin` to the end of the filename:
```shell
mv VMware-VMvisor-Installer-7.0-20133114.aarch64.iso{,.bin} # [tl! .cmd]
```
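The `{,.bin}` bit is bash brace expansion: `name.iso{,.bin}` expands to `name.iso name.iso.bin`, which saves retyping that long filename. A quick demonstration with a dummy file:
```shell
cd "$(mktemp -d)"
touch example.iso
# the shell expands this to: mv example.iso example.iso.bin
mv example.iso{,.bin}
ls
# → example.iso.bin
```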
Then it's time to write the image onto the USB drive:
@ -201,13 +201,13 @@ As I mentioned earlier, my initial goal is to deploy a Tailscale node on my new
#### Deploying Photon OS
VMware provides Photon in a few different formats, as described on the [download page](https://github.com/vmware/photon/wiki/Downloading-Photon-OS). I'm going to use the "OVA with virtual hardware v13 arm64" version so I'll kick off that download of `photon_uefi.ova`. I'm actually going to download that file straight to my `deb01` Linux VM:
```shell
wget https://packages.vmware.com/photon/4.0/Rev2/ova/photon_uefi.ova # [tl! .cmd]
```
and then spawn a quick Python web server to share it out:
```shell
python3 -m http.server # [tl! .cmd]
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... # [tl! .nocopy]
```
That will let me deploy from a resource already inside my lab network instead of transferring the OVA from my laptop. So now I can go back to my vSphere Client and go through the steps to **Deploy OVF Template** to the new host, and I'll plug in the URL `http://deb01.lab.bowdre.net:8000/photon_uefi.ova`:
@ -232,13 +232,13 @@ The default password for Photon's `root` user is `changeme`. You'll be forced to
![First login, and the requisite password change](first_login.png)
Now that I'm in, I'll set the hostname appropriately:
```shell
hostnamectl set-hostname pho01 # [tl! .cmd_root]
```
For now, the VM pulled an IP from DHCP but I would like to configure that statically instead. To do that, I'll create a new interface file:
```shell
cat > /etc/systemd/network/10-static-en.network << "EOF" # [tl! .cmd_root]
[Match]
Name = eth0
@ -251,33 +251,31 @@ DHCP = no
IPForward = yes
EOF
chmod 644 /etc/systemd/network/10-static-en.network # [tl! .cmd_root:1]
systemctl restart systemd-networkd
```
I'm including `IPForward = yes` to [enable IP forwarding](https://tailscale.com/kb/1104/enable-ip-forwarding/) for Tailscale.
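One subtle detail in that `cat` command: quoting the heredoc delimiter (`<< "EOF"` instead of `<< EOF`) tells the shell not to perform any expansion on the contents, so the file lands on disk exactly as typed. A minimal illustration of the difference:
```shell
name="eth0"
# quoted delimiter: contents pass through literally
cat << "EOF"
Name = $name
EOF
# → Name = $name

# unquoted delimiter: variables are expanded
cat << EOF
Name = $name
EOF
# → Name = eth0
```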
With networking sorted, it's probably a good idea to check for and apply any available updates:
```shell
tdnf update -y # [tl! .cmd_root]
```
I'll also go ahead and create a normal user account (with sudo privileges) for me to use:
```shell
useradd -G wheel -m john # [tl! .cmd_root:1]
passwd john
```
Now I can use SSH to connect to the VM and ditch the web console:
```shell
ssh pho01.lab.bowdre.net # [tl! .cmd]
Password: # [tl! .nocopy]
sudo whoami # [tl! .cmd]
# [tl! .nocopy:start]
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
@ -286,7 +284,7 @@ Administrator. It usually boils down to these three things:
#3) With great power comes great responsibility.
[sudo] password for john
root
root # [tl! .nocopy:end]
```
Looking good! I'll now move on to the justification[^justification] for this entire exercise:
@ -295,45 +293,42 @@ Looking good! I'll now move on to the justification[^justification] for this ent
#### Installing Tailscale
If I *weren't* doing this on hard mode, I could use Tailscale's [install script](https://tailscale.com/download) like I do on every other Linux system. Hard mode is what I do though, and the installer doesn't directly support Photon OS. I'll instead consult the [manual install instructions](https://tailscale.com/download/linux/static) which tell me to download the appropriate binaries from [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static). So I'll grab the link for the latest `arm64` build and pull that down to the VM:
```shell
curl https://pkgs.tailscale.com/stable/tailscale_1.22.2_arm64.tgz --output tailscale_arm64.tgz # [tl! .cmd]
```
Then I can unpack it:
```shell
sudo tdnf install tar # [tl! .cmd:2]
tar xvf tailscale_arm64.tgz
cd tailscale_1.22.2_arm64/
```
So I've got the `tailscale` and `tailscaled` binaries as well as some sample service configs in the `systemd` directory:
```shell
ls # [tl! .cmd]
total 32288 # [tl! .nocopy:4]
drwxr-x--- 2 john users 4096 Mar 18 02:44 systemd
-rwxr-x--- 1 john users 12187139 Mar 18 02:44 tailscale
-rwxr-x--- 1 john users 20866538 Mar 18 02:44 tailscaled
ls ./systemd # [tl! .cmd]
total 8 # [tl! .nocopy:2]
-rw-r----- 1 john users 287 Mar 18 02:44 tailscaled.defaults
-rw-r----- 1 john users 674 Mar 18 02:44 tailscaled.service
```
Dealing with the binaries is straightforward. I'll drop them into `/usr/bin/` and `/usr/sbin/` (respectively) and set the file permissions:
```shell
sudo install -m 755 tailscale /usr/bin/ # [tl! .cmd:1]
sudo install -m 755 tailscaled /usr/sbin/
```
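`install` is handy here because it copies the file and sets the mode in a single step (it can set ownership too, with `-o`/`-g`). A quick check of the result using throwaway paths in place of the real binary and `/usr/bin/`:
```shell
src="$(mktemp)"      # stand-in for the tailscale binary
dest="$(mktemp -d)"  # stand-in for /usr/bin
install -m 755 "$src" "$dest/tailscale"
stat -c '%a' "$dest/tailscale"
# → 755
```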
Then I'll descend to the `systemd` folder and see what's up:
```shell
cd systemd/ # [tl! .cmd:1]
cat tailscaled.defaults
# Set the port to listen on for incoming VPN packets. [tl! .nocopy:8]
# Remote nodes will automatically be informed about the new port number,
# but you might want to configure this in order to set external firewall
# settings.
@ -341,10 +336,9 @@ PORT="41641"
# Extra flags you might want to pass to tailscaled.
FLAGS=""
cat tailscaled.service # [tl! .cmd]
[Unit] # [tl! .nocopy:start]
Description=Tailscale node agent
Documentation=https://tailscale.com/kb/
Wants=network-pre.target
@ -367,28 +361,28 @@ CacheDirectoryMode=0750
Type=notify
[Install]
WantedBy=multi-user.target # [tl! .nocopy:end]
```
`tailscaled.defaults` contains the default configuration that will be referenced by the service, and `tailscaled.service` tells me that it expects to find it at `/etc/defaults/tailscaled`. So I'll copy it there and set the perms:
```shell
sudo install -m 644 tailscaled.defaults /etc/defaults/tailscaled # [tl! .cmd]
```
`tailscaled.service` will get dropped in `/usr/lib/systemd/system/`:
```shell
sudo install -m 644 tailscaled.service /usr/lib/systemd/system/ # [tl! .cmd]
```
Then I'll enable the service and start it:
```shell
sudo systemctl enable tailscaled.service # [tl! .cmd:1]
sudo systemctl start tailscaled.service
```
And finally log in to Tailscale, including my `tag:home` tag for [ACL purposes](/secure-networking-made-simple-with-tailscale/#acls) and a route advertisement for my home network so that my other Tailscale nodes can use this one to access other devices as well:
```shell
sudo tailscale up --advertise-tags "tag:home" --advertise-route "192.168.1.0/24" # [tl! .cmd]
```
That will return a URL I can use to authenticate, and I'll then be able to view and manage the new Tailscale node from the `login.tailscale.com` admin portal:


@ -74,9 +74,9 @@ Success! My new ingress rules appear at the bottom of the list.
![New rules added](s5Y0rycng.png)
That gets traffic from the internet and to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly). I'll start by listing the existing rules on the `INPUT` chain:
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
```shell
sudo iptables -L INPUT --line-numbers # [tl! .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:7]
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
2 ACCEPT icmp -- anywhere anywhere
@ -87,15 +87,15 @@ num target prot opt source destination
```
Note the `REJECT all` statement at line `6`. I'll need to insert my new `ACCEPT` rules for ports `80` and `443` above that catch-all reject rule:
```command
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
```shell
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:1]
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```
And then I'll confirm that the order is correct:
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
```shell
sudo iptables -L INPUT --line-numbers # [tl! .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:9]
num target prot opt source destination
1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
2 ACCEPT icmp -- anywhere anywhere
@ -108,9 +108,9 @@ num target prot opt source destination
```
I can use `nmap` running from my local Linux environment to confirm that I can now reach those ports on the VM. (They're still "closed" since nothing is listening on the ports yet, but the connections aren't being rejected.)
```command-session
nmap -Pn matrix.bowdre.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT
```shell
nmap -Pn matrix.bowdre.net # [tl! .cmd]
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT # [tl! .nocopy:10]
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.086s latency).
Other addresses for matrix.bowdre.net (not scanned): 2607:7700:0:1d:0:1:9688:6b4
@ -125,16 +125,16 @@ Nmap done: 1 IP address (1 host up) scanned in 8.44 seconds
Cool! Before I move on, I'll be sure to make the rules persistent so they'll be re-applied whenever `iptables` starts up:
```command-session
sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
```shell
sudo netfilter-persistent save # [tl! .cmd]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
### Reverse proxy setup
I had initially planned on using `certbot` to generate Let's Encrypt certificates, and then reference the certs as needed from an `nginx` or Apache reverse proxy configuration. While researching how the [proxy would need to be configured to front Synapse](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md), I found this sample `nginx` configuration:
```nginx {linenos=true}
```text
# torchlight! {"lineNumbers": true}
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
@ -159,7 +159,8 @@ server {
```
And this sample Apache one:
```apache {linenos=true}
```text
# torchlight! {"lineNumbers": true}
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com
@ -185,7 +186,8 @@ And this sample Apache one:
```
I also found this sample config for another web server called [Caddy](https://caddyserver.com):
```caddy {linenos=true}
```text
# torchlight! {"lineNumbers": true}
matrix.example.com {
reverse_proxy /_matrix/* http://localhost:8008
reverse_proxy /_synapse/client/* http://localhost:8008
@ -198,8 +200,8 @@ example.com:8448 {
One of these looks much simpler than the other two. I'd never heard of Caddy so I did some quick digging, and I found that it would actually [handle the certificates entirely automatically](https://caddyserver.com/docs/automatic-https) - in addition to having a much easier config. [Installing Caddy](https://caddyserver.com/docs/install#debian-ubuntu-raspbian) wasn't too bad, either:
```command
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
```shell
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https # [tl! .cmd:4]
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo apt-key add -
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
@ -207,7 +209,8 @@ sudo apt install caddy
```
Then I just need to put my configuration into the default `Caddyfile`, including the required `.well-known` delegation piece from earlier.
```caddy {linenos=true}
```text
# torchlight! {"lineNumbers": true}
# /etc/caddy/Caddyfile
matrix.bowdre.net {
reverse_proxy /_matrix/* http://localhost:8008
@ -228,16 +231,16 @@ I set up the `bowdre.net` section to return the appropriate JSON string to tell
(I wouldn't need that section at all if I were using a separate web server for `bowdre.net`; instead, I'd basically just add that `respond /.well-known/matrix/server` line to that other server's config.)
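For illustration, here's a rough sketch of what that separate-server config might look like (hypothetical Caddyfile, assuming the same delegation JSON as above):

```text
# torchlight! {"lineNumbers": true}
# hypothetical Caddyfile on a separate server hosting bowdre.net
bowdre.net {
  # tell other Matrix servers where my homeserver actually lives
  respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}` 200
  # ...the rest of the site's usual config...
}
```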
Now to enable the `caddy` service, start it, and restart it so that it loads the new config:
```command
sudo systemctl enable caddy
```shell
sudo systemctl enable caddy # [tl! .cmd:2]
sudo systemctl start caddy
sudo systemctl restart caddy
```
If I repeat my `nmap` scan from earlier, I'll see that the HTTP and HTTPS ports are now open. The server still isn't actually serving anything on those ports yet, but at least it's listening.
```command-session
nmap -Pn matrix.bowdre.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT
```shell
nmap -Pn matrix.bowdre.net # [tl! .cmd]
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT # [tl! .nocopy:9]
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.034s latency).
Not shown: 997 filtered ports
@ -265,57 +268,56 @@ Okay, let's actually serve something up now.
#### Docker setup
Before I can get on with [deploying Synapse in Docker](https://hub.docker.com/r/matrixdotorg/synapse), I first need to [install Docker](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) on the system:
```command-session
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
```
```command
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```
```command-session
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
```command
sudo apt update
```shell
sudo apt-get install \ # [tl! .cmd]
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ # [tl! .cmd]
sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \ # [tl! .cmd]
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update # [tl! .cmd:1]
sudo apt install docker-ce docker-ce-cli containerd.io
```
I'll also [install Docker Compose](https://docs.docker.com/compose/install/#install-compose):
```command
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \ # [tl! .cmd]
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose # [tl! .cmd]
```
And I'll add my `ubuntu` user to the `docker` group so that I won't have to run every docker command with `sudo`:
```command
sudo usermod -G docker -a ubuntu
```shell
sudo usermod -G docker -a ubuntu # [tl! .cmd]
```
I'll log out and back in so that the membership change takes effect, and then test both `docker` and `docker-compose` to make sure they're working:
```command-session
docker --version
Docker version 20.10.7, build f0df350
```
```command-session
docker-compose --version
docker-compose version 1.29.2, build 5becea4c
```shell
docker --version # [tl! .cmd]
Docker version 20.10.7, build f0df350 # [tl! .nocopy:1]
docker-compose --version # [tl! .cmd]
docker-compose version 1.29.2, build 5becea4c # [tl! .nocopy]
```
#### Synapse setup
Now I'll make a place for the Synapse installation to live, including a `data` folder that will be mounted into the container:
```command
sudo mkdir -p /opt/matrix/synapse/data
```shell
sudo mkdir -p /opt/matrix/synapse/data # [tl! .cmd:1]
cd /opt/matrix/synapse
```
And then I'll create the compose file to define the deployment:
```yaml {linenos=true}
```yaml
# torchlight! {"lineNumbers": true}
# /opt/matrix/synapse/docker-compose.yaml
services:
synapse:
@ -330,13 +332,13 @@ services:
Before I can fire this up, I'll need to generate an initial configuration as [described in the documentation](https://hub.docker.com/r/matrixdotorg/synapse). Here I'll specify the server name that I'd like other Matrix servers to know mine by (`bowdre.net`):
```command-session
docker run -it --rm \
```shell
docker run -it --rm \ # [tl! .cmd]
-v "/opt/matrix/synapse/data:/data" \
-e SYNAPSE_SERVER_NAME=bowdre.net \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse generate
# [tl! .nocopy:start]
Unable to find image 'matrixdotorg/synapse:latest' locally
latest: Pulling from matrixdotorg/synapse
69692152171a: Pull complete
@ -353,7 +355,7 @@ Status: Downloaded newer image for matrixdotorg/synapse:latest
Creating log config /data/bowdre.net.log.config
Generating config file /data/homeserver.yaml
Generating signing key file /data/bowdre.net.signing.key
A config file has been generated in '/data/homeserver.yaml' for server name 'bowdre.net'. Please review this file and customise it to your needs.
A config file has been generated in '/data/homeserver.yaml' for server name 'bowdre.net'. Please review this file and customise it to your needs. # [tl! .nocopy:end]
```
As instructed, I'll use `sudo vi data/homeserver.yaml` to review/modify the generated config. I'll leave
@ -375,16 +377,16 @@ so that I can create a user account without fumbling with the CLI. I'll be sure
There are a bunch of other useful configurations that can be made here, but these will do to get things going for now.
Time to start it up:
```command-session
docker-compose up -d
Creating network "synapse_default" with the default driver
```shell
docker-compose up -d # [tl! .cmd]
Creating network "synapse_default" with the default driver # [tl! .nocopy:1]
Creating synapse ... done
```
And use `docker ps` to confirm that it's running:
```command-session
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```shell
docker ps # [tl! .cmd]
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES # [tl! .nocopy:1]
573612ec5735 matrixdotorg/synapse "/start.py" 25 seconds ago Up 23 seconds (healthy) 8009/tcp, 127.0.0.1:8008->8008/tcp, 8448/tcp synapse
```
@ -417,21 +419,21 @@ All in, I'm pretty pleased with how this little project turned out, and I learne
### Update: Updating
After a while, it's probably a good idea to update both the Ubuntu server and the Synapse container running on it. Updating the server itself is as easy as:
```command
sudo apt update
```shell
sudo apt update # [tl! .cmd:1]
sudo apt upgrade
```
Here's what I do to update the container:
```bash
# Move to the working directory
cd /opt/matrix/synapse
# Pull a new version of the synapse image
docker-compose pull
# Stop the container
docker-compose down
# Start it back up without the old version
docker-compose up -d --remove-orphans
# Periodically remove the old docker images
docker image prune
```shell
# Move to the working directory # [tl! .nocopy]
cd /opt/matrix/synapse # [tl! .cmd]
# Pull a new version of the synapse image # [tl! .nocopy]
docker-compose pull # [tl! .cmd]
# Stop the container # [tl! .nocopy]
docker-compose down # [tl! .cmd]
# Start it back up without the old version # [tl! .nocopy]
docker-compose up -d --remove-orphans # [tl! .cmd]
# Periodically remove the old docker images # [tl! .nocopy]
docker image prune # [tl! .cmd]
```
@ -14,40 +14,41 @@ I found myself with a sudden need for parsing a Linux server's logs to figure ou
### Find IP-ish strings
This will get you all occurrences of things which look vaguely like IPv4 addresses:
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT # [tl! .cmd]
```
(It's not a perfect IP address regex since it would match things like `987.654.321.555` but it's close enough for my needs.)
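If I wanted a tighter match, I could bound each octet to the 0-255 range. Here's a sketch that relies on GNU `grep`'s `\b` word-boundary extension:

```shell
echo "987.654.321.555 192.168.1.10" | grep -o -E '\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b' # [tl! .cmd]
```

That drops the bogus `987.654.321.555` while still catching `192.168.1.10`.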
### Filter out `localhost`
The log likely includes a LOT of traffic to/from `127.0.0.1`, so let's toss out `localhost` by piping through `grep -v "127.0.0.1"` (`-v` will do an inverse match - only return results which *don't* match the given expression):
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1"
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" # [tl! .cmd]
```
### Count up the duplicates
Now we need to know how many times each IP shows up in the log. Since `uniq` only collapses *adjacent* duplicate lines, we'll `sort` the output first and then pass it through `uniq -c` (the `-c` flag returns a count of how many times each result appears):
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c # [tl! .cmd]
```
### Sort the results
We can use `sort` to order the results by count. `-n` tells it to sort based on numeric rather than character values, and `-r` reverses the list so that the larger numbers appear at the top:
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c | sort -n -r
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r # [tl! .cmd]
```
### Top 5
And, finally, let's use `head -n 5` to only get the first five results:
```command
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c | sort -n -r | head -n 5
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5 # [tl! .cmd]
```
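To sanity-check the full pipeline, I can point it at a tiny throwaway log (hypothetical file and contents; note the `sort` ahead of `uniq -c`, since `uniq` only counts adjacent duplicates):

```shell
printf '10.0.0.1 a\n10.0.0.2 b\n10.0.0.1 c\n127.0.0.1 d\n' > /tmp/demo_access.log # [tl! .cmd:1]
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /tmp/demo_access.log | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5
```

The repeated `10.0.0.1` floats to the top with a count of `2`.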
### Bonus round!
You know how old log files get rotated and compressed into files like `logname.1.gz`? I *very* recently learned that there are versions of the standard Linux text manipulation tools which can work directly on compressed log files, without having to first extract the files. I'd been doing things the hard way for years - no longer, now that I know about `zcat`, `zdiff`, `zgrep`, and `zless`!
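Quick proof with a throwaway file that the z-variants work just like their uncompressed counterparts:

```shell
echo "192.168.1.10 - GET /index.html" > /tmp/demo.log # [tl! .cmd:2]
gzip -f /tmp/demo.log
zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /tmp/demo.log.gz
```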
So let's use a `for` loop to iterate through 20 of those compressed logs, and use `date -r [filename]` to get the timestamp for each log as we go:
```command
for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \ACCESS_LOG.log.$i.gz | grep -v "127.0.0.1" | uniq -c | sort -n -r | head -n 5; done
```shell
for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \ # [tl! .cmd]
ACCESS_LOG.log.$i.gz | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5; done
```
Nice!
@ -39,9 +39,9 @@ ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verif
```
Further, attempting to pull down that URL with `curl` also failed:
```commandroot-session
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
curl: (60) SSL certificate problem: self signed certificate in certificate chain
```shell
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery # [tl! .cmd]
curl: (60) SSL certificate problem: self signed certificate in certificate chain # [tl! .nocopy:5]
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
@ -61,20 +61,21 @@ So here's what I did to get things working in my homelab:
![Exporting the self-signed CA cert](20211105_export_selfsigned_ca.png)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`.
3. Append the certificate to the end of the system `ca-bundle.crt`:
```commandroot
cat <vra.crt >> /etc/pki/tls/certs/ca-bundle.crt
```shell
cat <vra.crt >> /etc/pki/tls/certs/ca-bundle.crt # [tl! .cmd]
```
4. Test that I can now `curl` from vRA without a certificate error:
```commandroot-session
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
{"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""}
```curl
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery # [tl! .cmd]
{"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""} # [tl! .nocopy]
```
5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding
```cfg
```ini
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
```
above the `ExecStart` line:
```cfg {linenos=true,hl_lines=16}
```ini
# torchlight! {"lineNumbers": true}
# /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
@ -90,15 +91,15 @@ RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
PermissionsStartOnly=true
ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt # [tl! focus]
ExecStart=/usr/bin/raas
TimeoutStopSec=90
[Install]
WantedBy=multi-user.target
```
6. Stop and restart the `raas` service:
```command
systemctl daemon-reload
```shell
systemctl daemon-reload # [tl! .cmd:2]
systemctl stop raas
systemctl start raas
```
@ -110,8 +111,8 @@ systemctl start raas
The steps for doing this at work with an enterprise CA were pretty similar, with just slightly-different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in `.p7b` format.
2. Use `openssl` to extract the individual certificates:
```command
openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem
```shell
openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem # [tl! .cmd]
```
Copy it to the SSC appliance, and then pick up with Step 3 above.
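If you want to sanity-check that extraction step locally first, a throwaway self-signed cert (hypothetical filenames) demonstrates the round-trip:

```shell
openssl req -x509 -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 -nodes -subj "/CN=demo" # [tl! .cmd:2]
openssl crl2pkcs7 -nocrl -certfile /tmp/demo.crt -out /tmp/demo.p7b
openssl pkcs7 -inform PEM -outform PEM -in /tmp/demo.p7b -print_certs | grep -c "BEGIN CERTIFICATE"
```

The final `grep -c` simply counts the certificates that came back out of the `.p7b` bundle.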
@ -44,8 +44,8 @@ After hitting **Execute**, the Swagger UI will populate the *Responses* section
![curl request format](login_controller_3.png)
So I could easily replicate this using the `curl` utility by just copying and pasting the following into a shell:
```command-session
curl -X 'POST' \
```curl
curl -X 'POST' \ # [tl! .cmd]
'https://vra.lab.bowdre.net/csp/gateway/am/api/login' \
-H 'accept: */*' \
-H 'Content-Type: application/json' \
@ -69,31 +69,32 @@ Now I can go find an IaaS API that I'm interested in querying (like `/iaas/api/f
![Using Swagger to query for flavor mappings](flavor_mappings_swagger_request.png)
And here's the result:
```json {hl_lines=[6,10,14,44,48,52,56,60,64]}
```json
// torchlight! {"lineNumbers": true}
{
"content": [
{
"flavorMappings": {
"mapping": {
"1vCPU | 2GB [tiny]": {
"1vCPU | 2GB [tiny]": { // [tl! focus]
"cpuCount": 1,
"memoryInMB": 2048
},
"1vCPU | 1GB [micro]": {
"1vCPU | 1GB [micro]": { // [tl! focus]
"cpuCount": 1,
"memoryInMB": 1024
},
"2vCPU | 4GB [small]": {
"2vCPU | 4GB [small]": { // [tl! focus]
"cpuCount": 2,
"memoryInMB": 4096
}
},
}, // [tl! collapse:5]
"_links": {
"region": {
"href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f"
}
}
},
},// [tl! collapse:start]
"externalRegionId": "Datacenter:datacenter-39056",
"cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
"name": "",
@ -107,43 +108,43 @@ And here's the result:
},
"region": {
"href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f"
}
} // [tl! collapse:end]
}
},
{
"flavorMappings": {
"mapping": {
"2vCPU | 8GB [medium]": {
"2vCPU | 8GB [medium]": { // [tl! focus]
"cpuCount": 2,
"memoryInMB": 8192
},
"1vCPU | 2GB [tiny]": {
"1vCPU | 2GB [tiny]": { // [tl! focus]
"cpuCount": 1,
"memoryInMB": 2048
},
"8vCPU | 16GB [giant]": {
"8vCPU | 16GB [giant]": { // [tl! focus]
"cpuCount": 8,
"memoryInMB": 16384
},
"1vCPU | 1GB [micro]": {
"1vCPU | 1GB [micro]": { // [tl! focus]
"cpuCount": 1,
"memoryInMB": 1024
},
"2vCPU | 4GB [small]": {
"2vCPU | 4GB [small]": { // [tl! focus]
"cpuCount": 2,
"memoryInMB": 4096
},
"4vCPU | 12GB [large]": {
"4vCPU | 12GB [large]": { // [tl! focus]
"cpuCount": 4,
"memoryInMB": 12288
}
},
}, // [tl! collapse:5]
"_links": {
"region": {
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
}
}
},
}, // [tl! collapse:start]
"externalRegionId": "Datacenter:datacenter-1001",
"cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
"name": "",
@ -158,7 +159,7 @@ And here's the result:
"region": {
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
}
}
} // [tl! collapse:end]
}
],
"totalElements": 2,
@ -175,61 +176,62 @@ As you can see, Swagger can really help to jump-start the exploration of a new A
[HTTPie](https://httpie.io/) is a handy command-line utility optimized for interacting with web APIs. This will make things easier as I dig deeper.
Installing the [Debian package](https://httpie.io/docs/cli/debian-and-ubuntu) is a piece of ~~cake~~ _pie_[^pie]:
```command
curl -SsL https://packages.httpie.io/deb/KEY.gpg | sudo apt-key add -
```shell
curl -SsL https://packages.httpie.io/deb/KEY.gpg | sudo apt-key add - # [tl! .cmd:3]
sudo curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list
sudo apt update
sudo apt install httpie
```
Once installed, running `http` will give me a quick overview of how to use this new tool:
```command-session
http
usage:
```shell
http # [tl! .cmd]
usage: # [tl! .nocopy:start]
http [METHOD] URL [REQUEST_ITEM ...]
error:
the following arguments are required: URL
for more information:
run 'http --help' or visit https://httpie.io/docs/cli
run 'http --help' or visit https://httpie.io/docs/cli # [tl! .nocopy:end]
```
HTTPie cleverly interprets anything passed after the URL as a [request item](https://httpie.io/docs/cli/request-items), and it determines the item type based on a simple key/value syntax:
> Each request item is simply a key/value pair separated with the following characters: `:` (headers), `=` (data field, e.g., JSON, form), `:=` (raw data field), `==` (query parameters), `@` (file upload).
So my earlier request for an authentication token becomes:
```command
https POST vra.lab.bowdre.net/csp/gateway/am/api/login username='vra' password='********' domain='lab.bowdre.net'
```shell
https POST vra.lab.bowdre.net/csp/gateway/am/api/login username='vra' password='********' domain='lab.bowdre.net' # [tl! .cmd]
```
{{% notice tip "Working with Self-Signed Certificates" %}}
If your vRA endpoint is using a self-signed or otherwise untrusted certificate, pass the HTTPie option `--verify=no` to ignore certificate errors:
```command
https --verify=no POST [URL] [REQUEST_ITEMS]
```shell
https --verify=no POST [URL] [REQUEST_ITEMS] # [tl! .cmd]
```
{{% /notice %}}
Running that will return a bunch of interesting headers but I'm mainly interested in the response body:
```json
{
"cspAuthToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlhLFNUPVNvZmlhLEM9QkciLCJpYXQiOjE2NTQwMjQw[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-ErjC6F3c2mV1qIqES2oZbEpjxar16ZVSPshIaOoWRXe5uZB21tkuwVMgZuuwgmpliG_JBa1Y6Oh0FZBbI7o0ERro9qOW-s2npz4Csv5FwcXt0fa4esbXXIKINjqZMh9NDDb23bUabSag"
"cspAuthToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlh[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-npz4Csv5FwcXt0fa"
}
```
There's the auth token[^token] that I'll need for subsequent requests. I'll store that in a variable so that it's easier to wield:
```command
token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlhLFNUPVNvZmlhLEM9QkciLCJpYXQiOjE2NTQwMjQw[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-ErjC6F3c2mV1qIqES2oZbEpjxar16ZVSPshIaOoWRXe5uZB21tkuwVMgZuuwgmpliG_JBa1Y6Oh0FZBbI7o0ERro9qOW-s2npz4Csv5FwcXt0fa4esbXXIKINjqZMh9NDDb23bUabSag
```shell
token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlh[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-npz4Csv5FwcXt0fa # [tl! .cmd]
```
So now if I want to find out which images have been configured in vRA, I can ask:
```command
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token"
```shell
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token" # [tl! .cmd]
```
{{% notice note "Request Items" %}}
Remember from above that HTTPie will automatically insert key/value pairs separated by a colon into the request header.
{{% /notice %}}
And I'll get back some headers followed by a JSON object detailing the defined image mappings broken up by region:
```json {linenos=true,hl_lines=[11,14,37,40,53,56]}
```json
// torchlight! {"lineNumbers": true}
{
"content": [
{
@ -240,10 +242,10 @@ And I'll get back some headers followed by an JSON object detailing the defined
},
"externalRegionId": "Datacenter:datacenter-39056",
"mapping": {
"Photon 4": {
"Photon 4": { // [tl! focus]
"_links": {
"region": {
"href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f"
"href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" // [tl! focus]
}
},
"cloudConfig": "",
@ -266,10 +268,10 @@ And I'll get back some headers followed by an JSON object detailing the defined
},
"externalRegionId": "Datacenter:datacenter-1001",
"mapping": {
"Photon 4": {
"Photon 4": { // [tl! focus]
"_links": {
"region": {
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" // [tl! focus]
}
},
"cloudConfig": "",
@ -282,10 +284,10 @@ And I'll get back some headers followed by an JSON object detailing the defined
"name": "photon",
"osFamily": "LINUX"
},
"Windows Server 2019": {
"Windows Server 2019": { // [tl! focus]
"_links": {
"region": {
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" // [tl! focus]
}
},
"cloudConfig": "",
@ -376,7 +378,8 @@ I'll head into **Library > Actions** to create a new action inside my `com.virtu
| `configurationName` | `string` | Name of Configuration |
| `variableName` | `string` | Name of desired variable inside Configuration |
```javascript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: getConfigValue action
Inputs: path (string), configurationName (string), variableName (string)
@ -396,7 +399,8 @@ Next, I'll create another action in my `com.virtuallypotato.utility` module whic
![vraLogin action](vraLogin_action.png)
```javascript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraLogin action
Inputs: none
@ -428,7 +432,8 @@ I like to clean up after myself so I'm also going to create a `vraLogout` action
|:--- |:--- |:--- |
| `token` | `string` | Auth token of the session to destroy |
```javascript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraLogout action
Inputs: token (string)
@ -458,7 +463,8 @@ My final "utility" action for this effort will run in between `vraLogin` and `vr
|`uri`|`string`|Path to API controller (`/iaas/api/flavor-profiles`)|
|`content`|`string`|Any additional data to pass with the request|
```javascript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraExecute action
Inputs: token (string), method (string), uri (string), content (string)
@ -496,7 +502,8 @@ This action will:
Other actions wanting to interact with the vRA REST API will follow the same basic formula, though with some more logic and capability baked in.
Anyway, here's my first swing:
```JavaScript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraTester action
Inputs: none
@ -513,7 +520,8 @@ Pretty simple, right? Let's see if it works:
![vraTester action](vraTester_action.png)
It did! Though that result is a bit hard to parse visually, so I'm going to prettify it a bit:
```json {linenos=true,hl_lines=[17,35,56,74]}
```json
// torchlight! {"lineNumbers": true}
[
{
"tags": [],
@ -530,7 +538,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
"folder": "vRA_Deploy",
"externalRegionId": "Datacenter:datacenter-1001",
"cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
"name": "NUC",
"name": "NUC", // [tl! focus]
"id": "3d4f048a-385d-4759-8c04-117a170d060c",
"updatedAt": "2022-06-02",
"organizationId": "61ebe5bf-5f55-4dee-8533-7ad05c067dd9",
@ -548,7 +556,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
"href": "/iaas/api/zones/3d4f048a-385d-4759-8c04-117a170d060c"
},
"region": {
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
"href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" // [tl! focus]
},
"cloud-account": {
"href": "/iaas/api/cloud-accounts/75d29635-f128-4b85-8cf9-95a9e5981c68"
@ -569,7 +577,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
},
"externalRegionId": "Datacenter:datacenter-39056",
"cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
"name": "QTZ",
"name": "QTZ", // [tl! focus]
"id": "84470591-74a2-4659-87fd-e5d174a679a2",
"updatedAt": "2022-06-02",
"organizationId": "61ebe5bf-5f55-4dee-8533-7ad05c067dd9",
@ -587,7 +595,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
"href": "/iaas/api/zones/84470591-74a2-4659-87fd-e5d174a679a2"
},
"region": {
"href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f"
"href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" // [tl! focus]
},
"cloud-account": {
"href": "/iaas/api/cloud-accounts/75d29635-f128-4b85-8cf9-95a9e5981c68"
@ -609,7 +617,8 @@ This action will basically just repeat the call that I tested above in `vraTeste
![vraGetZones action](vraGetZones_action.png)
```javascript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraGetZones action
Inputs: none
@ -639,7 +648,8 @@ Oh, and the whole thing is wrapped in a conditional so that the code only execut
|:--- |:--- |:--- |
| `zoneName` | `string` | The name of the Zone selected in the request form |
```javascript {linenos=true}
```javascript
// torchlight! {"lineNumbers": true}
/* JavaScript: vraGetImages action
Inputs: zoneName (string)
Return type: array/string
@ -708,7 +718,8 @@ Next I'll repeat the same steps to create a new `image` input. This time, though
![Binding the input](image_input.png)
The full code for my template now looks like this:
```yaml {linenos=true}
```yaml
# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
zoneName:

View file

@ -50,21 +50,21 @@ I've described the [process of creating a new instance on OCI in a past post](/f
### Prepare the server
Once the server's up and running, I go through the usual steps of applying any available updates:
```command
sudo apt update
```shell
sudo apt update # [tl! .cmd:1]
sudo apt upgrade
```
#### Install Tailscale
And then I'll install Tailscale using their handy-dandy bootstrap script:
```command
curl -fsSL https://tailscale.com/install.sh | sh
```shell
curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd]
```
When I bring up the Tailscale interface, I'll use the `--advertise-tags` flag to identify the server with an [ACL tag](https://tailscale.com/kb/1068/acl-tags/). ([Within my tailnet](/secure-networking-made-simple-with-tailscale/#acls)[^tailnet], all of my other clients are able to connect to devices bearing the `cloud` tag but `cloud` servers can only reach back to other devices for performing DNS lookups.)
```command
sudo tailscale up --advertise-tags "tag:cloud"
```shell
sudo tailscale up --advertise-tags "tag:cloud" # [tl! .cmd]
```
[^tailnet]: [Tailscale's term](https://tailscale.com/kb/1136/tailnet/) for the private network which securely links Tailscale-connected devices.
@ -72,26 +72,22 @@ sudo tailscale up --advertise-tags "tag:cloud"
#### Install Docker
Next I install Docker and `docker-compose`:
```command
sudo apt install ca-certificates curl gnupg lsb-release
```shell
sudo apt install ca-certificates curl gnupg lsb-release # [tl! .cmd:2]
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```
```command-session
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
```command
sudo apt update
sudo apt update # [tl! .cmd:1]
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-compose-plugin
```
#### Configure firewall
This server automatically had an iptables firewall rule configured to permit SSH access. For Gitea, I'll also need to configure HTTP/HTTPS access. [As before](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration), I need to be mindful of the explicit `REJECT all` rule at the bottom of the `INPUT` chain:
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
```shell
sudo iptables -L INPUT --line-numbers # [tl! .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:8]
num target prot opt source destination
1 ts-input all -- anywhere anywhere
2 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
@ -103,32 +99,31 @@ num target prot opt source destination
```
So I'll insert the new rules at line 6:
```command
sudo iptables -L INPUT --line-numbers
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT
```shell
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:1]
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```
And confirm that it did what I wanted it to:
```command-session
sudo iptables -L INPUT --line-numbers
Chain INPUT (policy ACCEPT)
```shell
sudo iptables -L INPUT --line-numbers # [tl! focus .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:10]
num target prot opt source destination
1 ts-input all -- anywhere anywhere
2 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
3 ACCEPT icmp -- anywhere anywhere
4 ACCEPT all -- anywhere anywhere
5 ACCEPT udp -- anywhere anywhere udp spt:ntp
6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
6 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https # [tl! focus:1]
7 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
8 ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
9 REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
```
That looks good, so let's save the new rules:
```command-session
sudo netfilter-persistent save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
```shell
sudo netfilter-persistent save # [tl! .cmd]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
@ -143,19 +138,19 @@ I'm now ready to move on with installing Gitea itself.
I'll start with creating a `git` user. This account will be set as the owner of the data volume used by the Gitea container, but will also (perhaps more importantly) facilitate [SSH passthrough](https://docs.gitea.io/en-us/install-with-docker/#ssh-container-passthrough) into the container for secure git operations.
Here's where I create the account and also generate what will become the SSH key used by the git server:
```command
sudo useradd -s /bin/bash -m git
```shell
sudo useradd -s /bin/bash -m git # [tl! .cmd:1]
sudo -u git ssh-keygen -t ecdsa -C "Gitea Host Key"
```
The `git` user's SSH public key gets added as-is directly to that user's `authorized_keys` file:
```command
sudo -u git cat /home/git/.ssh/id_ecdsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys
```shell
sudo -u git cat /home/git/.ssh/id_ecdsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys # [tl! .cmd:1]
sudo -u git chmod 600 /home/git/.ssh/authorized_keys
```
When other users add their SSH public keys into Gitea's web UI, those will get added to `authorized_keys` with a little something extra: an alternate command to perform git actions instead of just SSH ones:
```cfg
```text
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty <user pubkey>
```
@ -164,14 +159,13 @@ No users have added their keys to Gitea just yet so if you look at `/home/git/.s
{{% /notice %}}
So I'll go ahead and create that extra command:
```command-session
cat <<"EOF" | sudo tee /usr/local/bin/gitea
```shell
cat <<"EOF" | sudo tee /usr/local/bin/gitea # [tl! .cmd]
#!/bin/sh
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
EOF
```
```command
sudo chmod +x /usr/local/bin/gitea
sudo chmod +x /usr/local/bin/gitea # [tl! .cmd]
```
So when I use a `git` command to interact with the server via SSH, the commands will get relayed into the Docker container on port 2222.
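To make the moving parts of that relay explicit, here's a purely illustrative Python sketch of the command string the shell wrapper builds (the real work is done by the two-line shell script itself):

```python
# Illustrative only: models the command the /usr/local/bin/gitea shell wrapper
# relays into the container when a git client connects over SSH.
def build_relay_command(ssh_original_command, argv0="/usr/local/bin/gitea"):
    # The wrapper re-invokes itself inside an SSH session to 127.0.0.1:2222,
    # preserving the client's original git command in SSH_ORIGINAL_COMMAND.
    inner = f'SSH_ORIGINAL_COMMAND="{ssh_original_command}" {argv0}'
    return ["ssh", "-p", "2222", "-o", "StrictHostKeyChecking=no",
            "git@127.0.0.1", inner]

cmd = build_relay_command("git-upload-pack 'john/repo.git'")
```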
@ -180,26 +174,27 @@ So when I use a `git` command to interact with the server via SSH, the commands
That takes care of most of the prep work, so now I'm ready to create the `docker-compose.yaml` file which will tell Docker how to host Gitea.
I'm going to place this in `/opt/gitea`:
```command
sudo mkdir -p /opt/gitea
```shell
sudo mkdir -p /opt/gitea # [tl! .cmd:1]
cd /opt/gitea
```
And I want to be sure that my new `git` user owns the `./data` directory which will be where the git contents get stored:
```command
sudo mkdir data
```shell
sudo mkdir data # [tl! .cmd:1]
sudo chown git:git -R data
```
Now to create the file:
```command
sudo vi docker-compose.yaml
```shell
sudo vi docker-compose.yaml # [tl! .cmd]
```
The basic contents of the file came from the [Gitea documentation for Installation with Docker](https://docs.gitea.io/en-us/install-with-docker/), but I also included some (highlighted) additional environment variables based on the [Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/):
`docker-compose.yaml`:
```yaml {linenos=true,hl_lines=["12-13","19-31",38,43]}
# torchlight! {"lineNumbers": true}
version: "3"
networks:
@ -211,14 +206,14 @@ services:
image: gitea/gitea:latest
container_name: gitea
environment:
- USER_UID=1003
- USER_UID=1003 # [tl! highlight:1]
- USER_GID=1003
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD=gitea
- GITEA____APP_NAME=Gitea
- GITEA____APP_NAME=Gitea # [tl! highlight:start]
- GITEA__log__MODE=file
- GITEA__openid__ENABLE_OPENID_SIGNIN=false
- GITEA__other__SHOW_FOOTER_VERSION=false
@ -230,19 +225,19 @@ services:
- GITEA__server__LANDING_PAGE=explore
- GITEA__service__DISABLE_REGISTRATION=true
- GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true
- GITEA__ui__DEFAULT_THEME=arc-green
- GITEA__ui__DEFAULT_THEME=arc-green # [tl! highlight:end]
restart: always
networks:
- gitea
volumes:
- ./data:/data
- /home/git/.ssh/:/data/git/.ssh
- /home/git/.ssh/:/data/git/.ssh # [tl! highlight]
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "127.0.0.1:2222:22"
- "127.0.0.1:2222:22" # [tl! highlight]
depends_on:
- db
@ -285,21 +280,22 @@ Let's go through the extra configs in a bit more detail:
Beyond the environment variables, I also defined a few additional options to allow the SSH passthrough to function. Mounting the `git` user's SSH config directory into the container will ensure that user keys defined in Gitea will also be reflected outside of the container, and setting the container to listen on local port `2222` will allow it to receive the forwarded SSH connections:
```yaml
volumes:
[...]
- /home/git/.ssh/:/data/git/.ssh
[...]
ports:
[...]
- "127.0.0.1:2222:22"
volumes: # [tl! focus]
- ./data:/data
- /home/git/.ssh/:/data/git/.ssh # [tl! focus]
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports: # [tl! focus]
- "3000:3000"
- "127.0.0.1:2222:22" # [tl! focus]
```
With the config in place, I'm ready to fire it up:
#### Start containers
Starting Gitea is as simple as
```command
sudo docker-compose up -d
```shell
sudo docker-compose up -d # [tl! .cmd]
```
which will spawn both the Gitea server and a `postgres` database to back it.
@ -311,8 +307,8 @@ I've [written before](/federated-matrix-server-synapse-on-oracle-clouds-free-tie
#### Install Caddy
So exactly how simple does Caddy make this? Well let's start with installing Caddy on the system:
```command
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
```shell
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https # [tl! .cmd:4]
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
@ -321,14 +317,14 @@ sudo apt install caddy
#### Configure Caddy
Configuring Caddy is as simple as creating a Caddyfile:
```command
sudo vi /etc/caddy/Caddyfile
```shell
sudo vi /etc/caddy/Caddyfile # [tl! .cmd]
```
Within that file, I tell it which fully-qualified domain name(s) I'd like it to respond to (and manage SSL certificates for), as well as that I'd like it to function as a reverse proxy and send the incoming traffic to the same port `3000` that's used by the Docker container:
```caddy
```text
git.bowdre.net {
reverse_proxy localhost:3000
reverse_proxy localhost:3000
}
```
@ -336,8 +332,8 @@ That's it. I don't need to worry about headers or ACME configurations or anythin
#### Start Caddy
All that's left at this point is to start up Caddy:
```command
sudo systemctl enable caddy
```shell
sudo systemctl enable caddy # [tl! .cmd:2]
sudo systemctl start caddy
sudo systemctl restart caddy
```
@ -363,14 +359,14 @@ And then I can log out and log back in with my new non-admin identity!
#### Add SSH public key
Associating a public key with my new Gitea account will allow me to easily authenticate my pushes from the command line. I can create a new SSH public/private keypair by following [GitHub's instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent):
```command
ssh-keygen -t ed25519 -C "user@example.com"
```shell
ssh-keygen -t ed25519 -C "user@example.com" # [tl! .cmd]
```
I'll view the contents of the public key - and go ahead and copy the output for future use:
```command-session
cat ~/.ssh/id_ed25519.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com
```shell
cat ~/.ssh/id_ed25519.pub # [tl! .cmd]
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com # [tl! .nocopy]
```
Back in the Gitea UI, I'll click the user menu up top and select **Settings**, then the *SSH / GPG Keys* tab, and click the **Add Key** button:
@ -381,9 +377,9 @@ Back in the Gitea UI, I'll click the user menu up top and select **Settings**, t
I can give the key a name and then paste in that public key, and then click the lower **Add Key** button to insert the new key.
To verify that the SSH passthrough magic I [configured earlier](#prepare-git-user) is working, I can take a look at `git`'s `authorized_keys` file:
```command-session
sudo tail -2 /home/git/.ssh/authorized_keys
# gitea public key
```shell
sudo tail -2 /home/git/.ssh/authorized_keys # [tl! .cmd]
# gitea public key [tl! .nocopy:1]
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,no-user-rc,restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com
```
@ -395,8 +391,8 @@ I'm already limiting this server's exposure by blocking inbound SSH (except for
[Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) can help with that by monitoring log files for repeated authentication failures and then creating firewall rules to block the offender.
Installing Fail2ban is simple:
```command
sudo apt update
```shell
sudo apt update # [tl! .cmd:1]
sudo apt install fail2ban
```
@ -411,10 +407,11 @@ Specifically, I'll want to watch `log/gitea.log` for messages like the following
```
So let's create that filter:
```command
sudo vi /etc/fail2ban/filter.d/gitea.conf
```shell
sudo vi /etc/fail2ban/filter.d/gitea.conf # [tl! .cmd]
```
```cfg
```ini
# torchlight! {"lineNumbers": true}
# /etc/fail2ban/filter.d/gitea.conf
[Definition]
failregex = .*(Failed authentication attempt|invalid credentials).* from <HOST>
@ -422,10 +419,11 @@ ignoreregex =
```
Next I create the jail, which tells Fail2ban what to do:
```command
sudo vi /etc/fail2ban/jail.d/gitea.conf
```shell
sudo vi /etc/fail2ban/jail.d/gitea.conf # [tl! .cmd]
```
```cfg
```ini
# torchlight! {"lineNumbers": true}
# /etc/fail2ban/jail.d/gitea.conf
[gitea]
enabled = true
@ -440,15 +438,15 @@ action = iptables-allports
This configures Fail2ban to watch the log file (`logpath`) inside the data volume mounted to the Gitea container for messages which match the pattern I just configured (`gitea`). If a system fails to log in 5 times (`maxretry`) within 1 hour (`findtime`, in seconds) then the offending IP will be banned for 1 day (`bantime`, in seconds).
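To sanity-check the filter, here's a rough Python approximation of how Fail2ban applies that `failregex` (Fail2ban substitutes its own host-matching pattern for `<HOST>`; the log line below is a made-up sample, not real Gitea output):

```python
import re

# Approximate Fail2ban's handling: swap <HOST> for an IP-capturing group.
failregex = r'.*(Failed authentication attempt|invalid credentials).* from <HOST>'
pattern = re.compile(failregex.replace('<HOST>', r'(?P<host>[\d.]+)'))

sample = ('2022/07/17 21:52:26 ...auth.go:189:SignInPost() [I] '
          'Failed authentication attempt for admin from 203.0.113.42:51772')
match = pattern.search(sample)
offender = match.group('host')  # the IP Fail2ban would ban
```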
Then I just need to enable and start Fail2ban:
```command
sudo systemctl enable fail2ban
```shell
sudo systemctl enable fail2ban # [tl! .cmd:1]
sudo systemctl start fail2ban
```
To verify that it's working, I can deliberately fail to log in to the web interface and watch `/var/log/fail2ban.log`:
```command-session
sudo tail -f /var/log/fail2ban.log
2022-07-17 21:52:26,978 fail2ban.filter [36042]: INFO [gitea] Found ${MY_HOME_IP}| - 2022-07-17 21:52:26
```shell
sudo tail -f /var/log/fail2ban.log # [tl! .cmd]
2022-07-17 21:52:26,978 fail2ban.filter [36042]: INFO [gitea] Found ${MY_HOME_IP}| - 2022-07-17 21:52:26 # [tl! .nocopy]
```
Excellent, let's now move on to creating some content.
@ -480,8 +478,8 @@ Once it's created, the new-but-empty repository gives me instructions on how I c
![Empty repository](empty_repo.png)
Now I can follow the instructions to initialize my local Obsidian vault (stored at `~/obsidian-vault/`) as a git repository and perform my initial push to Gitea:
```command
cd ~/obsidian-vault/
```shell
cd ~/obsidian-vault/ # [tl! .cmd:5]
git init
git add .
git commit -m "initial commit"

View file

@ -23,13 +23,14 @@ If you'd just like to import a working phpIPAM integration into your environment
Before even worrying about the SDK, I needed to [get a phpIPAM instance ready](https://phpipam.net/documents/installation/). I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my "Home" network (`192.168.1.0/24`). I installed Ubuntu 20.04.1 LTS, and then used [this guide](https://computingforgeeks.com/install-and-configure-phpipam-on-ubuntu-debian-linux/) to install phpIPAM.
Once phpIPAM was running and accessible via the web interface, I then used `openssl` to generate a self-signed certificate to be used for the SSL API connection:
```command
sudo mkdir /etc/apache2/certificate
```shell
sudo mkdir /etc/apache2/certificate # [tl! .cmd:2]
cd /etc/apache2/certificate/
sudo openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out apache-certificate.crt -keyout apache.key
```
I edited the apache config file to bind that new certificate on port 443, and to redirect requests on port 80 to port 443:
```apache {linenos=true}
```text
# torchlight! {"lineNumbers": true}
<VirtualHost *:80>
ServerName ipam.lab.bowdre.net
Redirect permanent / https://ipam.lab.bowdre.net
@ -54,7 +55,8 @@ After restarting apache, I verified that hitting `http://ipam.lab.bowdre.net` re
Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`.
This is Ubuntu, so I edited `/etc/netplan/99-netcfg-vmware.yaml` to add the `routes` section at the bottom:
```yaml {linenos=true,hl_lines="17-20"}
```yaml
# torchlight! {"lineNumbers": true}
# /etc/netplan/99-netcfg-vmware.yaml
network:
version: 2
@ -71,24 +73,23 @@ network:
- lab.bowdre.net
addresses:
- 192.168.1.5
routes:
routes: # [tl! focus:3]
- to: 172.16.0.0/16
via: 192.168.1.100
metric: 100
```
I then ran `sudo netplan apply` so the change would take immediate effect and confirmed the route was working by pinging the vCenter's interface on the `172.16.10.0/24` network:
```command
sudo netplan apply
```shell
sudo netplan apply # [tl! .cmd]
```
```command-session
ip route
default via 192.168.1.1 dev ens160 proto static
```shell
ip route # [tl! .cmd]
default via 192.168.1.1 dev ens160 proto static # [tl! .nocopy:3]
172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100
192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14
```
```command-session
ping 172.16.10.12
PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data.
ping 172.16.10.12 # [tl! .cmd]
PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data. # [tl! .nocopy:7]
64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms
64 bytes from 172.16.10.12: icmp_seq=2 ttl=64 time=0.256 ms
64 bytes from 172.16.10.12: icmp_seq=3 ttl=64 time=0.241 ms
@ -99,7 +100,7 @@ rtt min/avg/max/mdev = 0.241/0.259/0.282/0.016 ms
```
Now would also be a good time to go ahead and enable cron jobs so that phpIPAM will automatically scan its defined subnets for changes in IP availability and device status. phpIPAM includes a pair of scripts in `INSTALL_DIR/functions/scripts/`: one for discovering new hosts, and the other for checking the status of previously discovered hosts. So I ran `sudo crontab -e` to edit root's crontab and pasted in these two lines to call both scripts every 15 minutes:
```cron
```text
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/discoveryCheck.php
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/pingCheck.php
```
@ -205,9 +206,10 @@ Now that I know how to talk to phpIPAM via its REST API, it's time to figure out
I downloaded the SDK from [here](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk). It's got a pretty good [README](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/README_VMware.md) which describes the requirements (Java 8+, Maven 3, Python3, Docker, internet access) as well as how to build the package. I also consulted [this white paper](https://docs.vmware.com/en/vRealize-Automation/8.2/ipam_integration_contract_reqs.pdf) which describes the inputs provided by vRA and the outputs expected from the IPAM integration.
The README tells you to extract the .zip and make a simple modification to the `pom.xml` file to "brand" the integration:
```xml {linenos=true,hl_lines="2-4"}
```xml
# torchlight! {"lineNumbers": true}
<properties>
<provider.name>phpIPAM</provider.name>
<provider.name>phpIPAM</provider.name> <!-- [tl! focus:2] -->
<provider.description>phpIPAM integration for vRA</provider.description>
<provider.version>1.0.3</provider.version>
@ -221,7 +223,8 @@ The README tells you to extract the .zip and make a simple modification to the `
You can then kick off the build with `mvn package -PcollectDependencies -Duser.id=${UID}`, which will (eventually) spit out `./target/phpIPAM.zip`. You can then [import the package to vRA](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) and test it against the `httpbin.org` hostname to validate that the build process works correctly.
You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing `./src/main/resources/endpoint-schema.json`. I added an `apiAppId` field:
```json {linenos=true,hl_lines=[12,38]}
```json
// torchlight! {"lineNumbers":true}
{
"layout":{
"pages":[
@ -233,7 +236,7 @@ You'll notice that the form includes fields for Username, Password, and Hostname
"id":"section_1",
"fields":[
{
"id":"apiAppId",
"id":"apiAppId", // [tl! focus]
"display":"textField"
},
{
@ -259,7 +262,7 @@ You'll notice that the form includes fields for Username, Password, and Hostname
"type":{
"dataType":"string"
},
"label":"API App ID",
"label":"API App ID", // [tl! focus]
"constraints":{
"required":true
}
@ -321,7 +324,8 @@ Example payload:
```
The `do_validate_endpoint` function has a handy comment letting us know that's where we'll drop in our code:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
def do_validate_endpoint(self, auth_credentials, cert):
# Your implementation goes here
@ -332,7 +336,8 @@ def do_validate_endpoint(self, auth_credentials, cert):
response = requests.get("https://" + self.inputs["endpointProperties"]["hostName"], verify=cert, auth=(username, password))
```
The example code gives us a nice start at how we'll get our inputs from vRA. So let's expand that a bit:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
def do_validate_endpoint(self, auth_credentials, cert):
# Build variables
username = auth_credentials["privateKeyId"]
@ -341,19 +346,22 @@ def do_validate_endpoint(self, auth_credentials, cert):
apiAppId = self.inputs["endpointProperties"]["apiAppId"]
```
As before, we'll construct the "base" URI by inserting the `hostname` and `apiAppId`, and we'll combine the `username` and `password` into our `auth` variable:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
uri = f'https://{hostname}/api/{apiAppId}/'
auth = (username, password)
```
I realized that I'd be needing to do the same authentication steps for each one of these operations, so I created a new `auth_session()` function to do the heavy lifting. Other operations will also need to return the authorization token but for this run we really just need to know whether the authentication was successful, which we can do by checking `req.status_code`.
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
return req
```
And we'll call that function from `do_validate_endpoint()`:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
# Test auth connection
try:
response = auth_session(uri, auth, cert)
@ -372,7 +380,8 @@ After completing each operation, run `mvn package -PcollectDependencies -Duser.i
Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
![Extensibility action runs](e4PTJxfqH.png)
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
```json {linenos=true}
```json
// torchlight! {"lineNumbers": true}
{
"__metadata": {
"headers": {
@ -399,7 +408,8 @@ That's one operation in the bank!
### Step 6: 'Get IP Ranges' action
So vRA can authenticate against phpIPAM; next, let's actually query to get a list of available IP ranges. This happens in `./src/main/python/get_ip_ranges/source.py`. We'll start by pulling over our `auth_session()` function and flesh it out a bit more to return the authorization token:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
@ -409,7 +419,8 @@ def auth_session(uri, auth, cert):
return token
```
We'll then modify `do_get_ip_ranges()` with our needed variables, and then call `auth_session()` to get the necessary token:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
def do_get_ip_ranges(self, auth_credentials, cert):
# Build variables
username = auth_credentials["privateKeyId"]
@ -423,7 +434,8 @@ def do_get_ip_ranges(self, auth_credentials, cert):
token = auth_session(uri, auth, cert)
```
We can then query for the list of subnets, just like we did earlier:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
# Request list of subnets
subnet_uri = f'{uri}/subnets/'
ipRanges = []
@ -434,7 +446,8 @@ I decided to add the extra `filter_by=isPool&filter_value=1` argument to the que
{{% notice note "Update" %}}
I now filter for networks identified by the designated custom field like so:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
# Request list of subnets
subnet_uri = f'{uri}/subnets/'
if enableFilter == "true":
@ -452,7 +465,8 @@ I now filter for networks identified by the designated custom field like so:
Now is a good time to consult [that white paper](https://docs.vmware.com/en/VMware-Cloud-services/1.0/ipam_integration_contract_reqs.pdf) to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return `ipRanges` which is a list of `IpRange` objects. `IpRange` requires `id`, `name`, `startIPAddress`, `endIPAddress`, `ipVersion`, and `subnetPrefixLength` properties. It can also accept `description`, `gatewayAddress`, and `dnsServerAddresses` properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly.
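Put together, a single `IpRange` object returned to vRA might look something like this (the values are illustrative, not pulled from a real phpIPAM instance):

```python
# Illustrative IpRange object with the required fields plus a few optional ones.
ipRange = {
    'id': '7',
    'name': '172.16.10.0/24',
    'description': '1610-Management',
    'startIPAddress': '172.16.10.1',
    'endIPAddress': '172.16.10.254',
    'ipVersion': 'IPv4',
    'subnetPrefixLength': '24',
    'gatewayAddress': '172.16.10.1',
    'dnsServerAddresses': ['192.168.1.5'],
}
required = {'id', 'name', 'startIPAddress', 'endIPAddress',
            'ipVersion', 'subnetPrefixLength'}
```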
For instance, these are pretty direct matches:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
ipRange['id'] = str(subnet['id'])
ipRange['description'] = str(subnet['description'])
ipRange['subnetPrefixLength'] = str(subnet['mask'])
@ -463,32 +477,37 @@ ipRange['name'] = f"{str(subnet['subnet'])}/{str(subnet['mask'])}"
```
Working with IP addresses in Python can be greatly simplified by use of the `ipaddress` module, so I added an `import ipaddress` statement near the top of the file. I also added it to `requirements.txt` to make sure it gets picked up by the Maven build. I can then use that to figure out the IP version as well as computing reasonable start and end IP addresses:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
network = ipaddress.ip_network(str(subnet['subnet']) + '/' + str(subnet['mask']))
ipRange['ipVersion'] = 'IPv' + str(network.version)
ipRange['startIPAddress'] = str(network[1])
ipRange['endIPAddress'] = str(network[-2])
```
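A quick check of what those expressions actually produce for one of the lab subnets:

```python
import ipaddress

# For a /24, index [1] is the first usable host and [-2] is the last one
# ([0] is the network address and [-1] is the broadcast address).
network = ipaddress.ip_network('172.16.10.0/24')
ip_version = 'IPv' + str(network.version)   # 'IPv4'
start_ip = str(network[1])                  # '172.16.10.1'
end_ip = str(network[-2])                   # '172.16.10.254'
```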
I'd like to try to get the DNS servers from phpIPAM if they're defined, but I also don't want the whole thing to puke if a subnet doesn't have that defined. phpIPAM returns the DNS servers as a semicolon-delimited string; I need them to look like a Python list:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
try:
ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')]
except:
ipRange['dnsServerAddresses'] = []
```
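For example, fed a mocked-up phpIPAM subnet, that snippet turns the delimited string into a clean list, and a subnet with no `nameservers` key falls back to an empty list:

```python
# Mocked phpIPAM subnet data to show the parsing behavior.
subnet = {'nameservers': {'namesrv1': '192.168.1.5;8.8.8.8'}}
try:
    dns_servers = [server.strip()
                   for server in str(subnet['nameservers']['namesrv1']).split(';')]
except (KeyError, TypeError):
    dns_servers = []

empty_subnet = {}  # no nameservers defined for this one
try:
    no_dns = [server.strip()
              for server in str(empty_subnet['nameservers']['namesrv1']).split(';')]
except (KeyError, TypeError):
    no_dns = []
```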
I can also nest another API request to find which address is marked as the gateway for a given subnet:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert)
if gw_req.status_code == 200:
gateway = gw_req.json()['data'][0]['ip']
ipRange['gatewayAddress'] = gateway
```
And then I merge each of these `ipRange` objects into the `ipRanges` list which will be returned to vRA:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
ipRanges.append(ipRange)
```
After rearranging a bit and tossing in some logging, here's what I've got:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
for subnet in subnets:
ipRange = {}
ipRange['id'] = str(subnet['id'])
@ -523,7 +542,7 @@ The full code can be found [here](https://github.com/jbowdre/phpIPAM-for-vRA8/bl
In any case, it's time to once again use `mvn package -PcollectDependencies -Duser.id=${UID}` to fire off the build, and then import `phpIPAM.zip` into vRA.
vRA runs the `phpIPAM_GetIPRanges` action about every ten minutes so keep checking back on the **Extensibility > Action Runs** view until it shows up. You can then select the action and review the Log to see which IP ranges got picked up:
```log
```
[2021-02-21 23:14:04,026] [INFO] - Querying for auth credentials
[2021-02-21 23:14:04,051] [INFO] - Credentials obtained successfully!
[2021-02-21 23:14:04,089] [INFO] - Found subnet: 172.16.10.0/24 - 1610-Management.
@ -544,7 +563,8 @@ Next, we need to figure out how to allocate an IP.
### Step 7: 'Allocate IP' action
I think we've got a rhythm going now. So we'll dive into `./src/main/python/allocate_ip/source.py`, create our `auth_session()` function, and add our variables to the `do_allocate_ip()` function. I also created a new `bundle` object to hold the `uri`, `token`, and `cert` items so that I don't have to keep typing those over and over and over.
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
auth_uri = f'{uri}/user/'
req = requests.post(auth_uri, auth=auth, verify=cert)
@ -571,7 +591,8 @@ def do_allocate_ip(self, auth_credentials, cert):
}
```
I left the remainder of `do_allocate_ip()` intact but modified its calls to other functions so that my new `bundle` would be included:
```python {linenos=true}
```python
# torchlight! {"lineNumbers": true}
allocation_result = []
try:
resource = self.inputs["resourceInfo"]
@ -586,7 +607,8 @@ except Exception as e:
raise e
```
I also added `bundle` to the `allocate()` function:
```python
# torchlight! {"lineNumbers": true}
def allocate(resource, allocation, context, endpoint, bundle):
    last_error = None
    # ...
    raise last_error
```
The heavy lifting is actually handled in `allocate_in_range()`. Right now, my implementation only supports doing a single allocation, so I added an escape in case someone asks to do something crazy like allocate *2* IPs. I then set up my variables:
```python
# torchlight! {"lineNumbers": true}
def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle):
    if int(allocation['size']) == 1:
        vmName = resource['name']
        # ...
        payload = {
            # ...
        }
```
That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate `import datetime` statement at the top of this file, and include `datetime` in `requirements.txt`.
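To illustrate what that description ends up looking like (the VM name here is a hypothetical stand-in), the f-string just drops a `datetime.now()` timestamp into the text:

```python
from datetime import datetime

vmName = 'web01'  # hypothetical VM name
description = f'Reserved by vRA for {vmName} at {datetime.now()}'
print(description)  # e.g. Reserved by vRA for web01 at 2021-02-21 23:14:04.026020
```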
So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which `range_id` to use and it will return the first available IP.
```python
# torchlight! {"lineNumbers": true}
allocate_uri = f'{uri}/addresses/first_free/{str(range_id)}/'
allocate_req = requests.post(allocate_uri, data=payload, headers=token, verify=cert)
allocate_req = allocate_req.json()
```
Per the white paper, we'll need to return `ipAllocationId`, `ipAddresses`, `ipRangeId`, and `ipVersion` to vRA in an `AllocationResult`. Once again, I'll leverage the `ipaddress` module for figuring out the version (and, once again, I'll add it as an import and to the `requirements.txt` file).
```python
# torchlight! {"lineNumbers": true}
if allocate_req['success']:
    version = ipaddress.ip_address(allocate_req['data']).version
    result = {
        # ...
    }
else:
    # ...
return result
```
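As a quick aside, that's all the `ipaddress` module has to do for us here; version detection is a one-liner (the addresses below are just samples):

```python
import ipaddress

# .version returns 4 or 6 depending on the address family
print(ipaddress.ip_address('172.16.40.3').version)  # 4
print(ipaddress.ip_address('2001:db8::1').version)  # 6
```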
I also implemented a hasty `rollback()` in case something goes wrong and we need to undo the allocation:
```python
# torchlight! {"lineNumbers": true}
def rollback(allocation_result, bundle):
    uri = bundle['uri']
    token = bundle['token']
    # ...
    return
```
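For reference, the cleanup boils down to issuing a `DELETE` against the address record that was just created. This sketch only builds the request target; the endpoint path is my reading of the phpIPAM API docs, and all of the values are hypothetical:

```python
def release_uri(address_id, bundle):
    # phpIPAM releases an allocation with DELETE /addresses/{id}/
    # (pass this to requests.delete() along with the token header)
    return f"{bundle['uri']}/addresses/{address_id}/"

bundle = {'uri': 'https://ipam.example.com/api/vra'}  # hypothetical endpoint
print(release_uri(86, bundle))  # https://ipam.example.com/api/vra/addresses/86/
```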
The full `allocate_ip` code is [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/allocate_ip/source.py). Once more, run `mvn package -PcollectDependencies -Duser.id=${UID}` and import the new `phpIPAM.zip` package into vRA. You can then open a Cloud Assembly Cloud Template associated with one of the specified networks and hit the "Test" button to see if it works. You should see a new `phpIPAM_AllocateIP` action run appear on the **Extensibility > Action runs** tab. Check the Log for something like this:
```log
[2021-02-22 01:31:41,729] [INFO] - Querying for auth credentials
[2021-02-22 01:31:41,757] [INFO] - Credentials obtained successfully!
[2021-02-22 01:31:41,773] [INFO] - Allocating from range 12
```

Almost done!
### Step 8: 'Deallocate IP' action
The last step is to remove the IP allocation when a vRA deployment gets destroyed. It starts just like the `allocate_ip` action with our `auth_session()` function and variable initialization:
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    # ...

def do_deallocate_ip(self, auth_credentials, cert):
    # ...
    bundle = {
        'uri': uri,
        'token': token,
        'cert': cert
    }
```
And the `deallocate()` function is basically a prettier version of the `rollback()` function from the `allocate_ip` action:
```python
# torchlight! {"lineNumbers": true}
def deallocate(resource, deallocation, bundle):
    uri = bundle['uri']
    token = bundle['token']
    # ...
    return {
        # ...
    }
```
You can review the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/deallocate_ip/source.py). Build the package with Maven, import to vRA, and run another test deployment. The `phpIPAM_DeallocateIP` action should complete successfully. Something like this will be in the log:
```log
[2021-02-22 01:36:29,438] [INFO] - Querying for auth credentials
[2021-02-22 01:36:29,461] [INFO] - Credentials obtained successfully!
[2021-02-22 01:36:29,476] [INFO] - Deallocating ip 172.16.40.3 from range 12
```
And the Outputs section of the Details tab will show:
```json
// torchlight! {"lineNumbers": true}
{
"ipDeallocations": [
{