update posts for torchlight

John Bowdre 2023-11-05 17:38:20 -06:00
parent f6cc404866
commit 36335c3774
11 changed files with 527 additions and 475 deletions


@@ -22,62 +22,64 @@ I eventually came across [this blog post](https://www.virtualnebula.com/blog/201
### Preparing the SSH host
I deployed a Windows Server 2019 Core VM to use as my SSH host, and I joined it to my AD domain as `win02.lab.bowdre.net`. Once that's taken care of, I need to install the RSAT DNS tools so that I can use the `Add-DnsServerResourceRecord` and associated cmdlets. I can do that through PowerShell like so:
```powershell
# Install RSAT DNS tools [tl! .nocopy]
Add-WindowsCapability -online -name Rsat.Dns.Tools~~~~0.0.1.0 # [tl! .cmd_pwsh]
```
Instead of using a third-party SSH server, I'll use the OpenSSH Server that's already available in Windows 10 (1809+) and Server 2019:
```powershell
# Install OpenSSH Server [tl! .nocopy]
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 # [tl! .cmd_pwsh]
```
I'll also want to set it so that the default shell upon SSH login is PowerShell (rather than the standard Command Prompt) so that I can have easy access to those DNS cmdlets:
```powershell
# Set PowerShell as the default Shell (for access to DNS cmdlets) [tl! .nocopy]
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell ` # [tl! .cmd_pwsh:2]
    -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
    -PropertyType String -Force
```
I'll be using my `lab\vra` service account for managing DNS. I've already given it the appropriate rights on the DNS server, but I'll also add it to the Administrators group on my SSH host:
```powershell
# Add the service account as a local administrator [tl! .nocopy]
Add-LocalGroupMember -Group Administrators -Member "lab\vra" # [tl! .cmd_pwsh]
```
And I'll modify the OpenSSH configuration so that only members of that Administrators group are permitted to log into the server via SSH:
```powershell
# Restrict SSH access to members in the local Administrators group [tl! .nocopy]
(Get-Content "C:\ProgramData\ssh\sshd_config") -Replace "# Authentication:", `
    "$&`nAllowGroups Administrators" | Set-Content "C:\ProgramData\ssh\sshd_config" # [tl! .cmd_pwsh:-1]
```
Finally, I'll start the `sshd` service and set it to start up automatically:
```powershell
# Start service and set it to automatic [tl! .nocopy]
Set-Service -Name sshd -StartupType Automatic -Status Running # [tl! .cmd_pwsh]
```
#### A quick test
At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone:
```powershell
ssh vra@win02.lab.bowdre.net # [tl! .cmd_pwsh]
vra@win02.lab.bowdre.net's password: # [tl! .nocopy:3]

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net `
    -Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99 # [tl! .cmd_pwsh:-1]

nslookup testy # [tl! .cmd_pwsh]
Server: win01.lab.bowdre.net # [tl! .nocopy:start]
Address: 192.168.1.5

Name: testy.lab.bowdre.net
Address: 172.16.99.99
# [tl! .nocopy:end]
Remove-DnsServerResourceRecord -ComputerName win01.lab.bowdre.net `
    -Name testy -ZoneName lab.bowdre.net -RRType A -Force # [tl! .cmd_pwsh:-1]

nslookup testy # [tl! .cmd_pwsh]
Server: win01.lab.bowdre.net # [tl! .nocopy:3]
Address: 192.168.1.5

*** win01.lab.bowdre.net can't find testy: Non-existent domain
@@ -111,23 +113,24 @@ resources:
```
So here's the complete cloud template that I've been working on:
```yaml
# torchlight! {"lineNumbers": true}
formatVersion: 1 # [tl! focus:1]
inputs:
  site: # [tl! collapse:5]
    type: string
    title: Site
    enum:
      - BOW
      - DRE
  image: # [tl! collapse:6]
    type: string
    title: Operating System
    oneOf:
      - title: Windows Server 2019
        const: ws2019
    default: ws2019
  size: # [tl! collapse:10]
    title: Resource Size
    type: string
    oneOf:
@@ -138,18 +141,18 @@ inputs:
      - title: 'Small [2vCPU|2GB]'
        const: small
    default: small
  network: # [tl! collapse:2]
    title: Network
    type: string
  adJoin: # [tl! collapse:3]
    title: Join to AD domain
    type: boolean
    default: true
  staticDns: # [tl! highlight:3 focus:3]
    title: Create static DNS record
    type: boolean
    default: false
  environment: # [tl! collapse:10]
    type: string
    title: Environment
    oneOf:
@@ -160,7 +163,7 @@ inputs:
      - title: Production
        const: P
    default: D
  function: # [tl! collapse:14]
    type: string
    title: Function Code
    oneOf:
@@ -175,34 +178,34 @@ inputs:
      - title: Testing (TST)
        const: TST
    default: TST
  app: # [tl! collapse:5]
    type: string
    title: Application Code
    minLength: 3
    maxLength: 3
    default: xxx
  description: # [tl! collapse:4]
    type: string
    title: Description
    description: Server function/purpose
    default: Testing and evaluation
  poc_name: # [tl! collapse:3]
    type: string
    title: Point of Contact Name
    default: Jack Shephard
  poc_email: # [tl! collapse:4]
    type: string
    title: Point of Contact Email
    default: username@example.com
    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
  ticket: # [tl! collapse:3]
    type: string
    title: Ticket/Request Number
    default: 4815162342
resources: # [tl! focus:3]
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties: # [tl! collapse:start]
      image: '${input.image}'
      flavor: '${input.size}'
      site: '${input.site}'
@@ -212,9 +215,9 @@ resources:
      ignoreActiveDirectory: '${!input.adJoin}'
      activeDirectory:
        relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
      customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}' # [tl! collapse:end]
      staticDns: '${input.staticDns}' # [tl! focus highlight]
      dnsDomain: lab.bowdre.net # [tl! collapse:start]
      poc: '${input.poc_name + " (" + input.poc_email + ")"}'
      ticket: '${input.ticket}'
      description: '${input.description}'
@@ -222,10 +225,10 @@ resources:
        - network: '${resource.Cloud_vSphere_Network_1.id}'
          assignment: static
      constraints:
        - tag: 'comp:${to_lower(input.site)}' # [tl! collapse:end]
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties: # [tl! collapse:3]
      networkType: existing
      constraints:
        - tag: 'net:${input.network}'
@@ -280,9 +283,12 @@ Now we're ready for the good part: inserting a new scriptable task into the work
![Task inputs](20210809_task_inputs.png)
And here's the JavaScript for the task:
```javascript
// torchlight! {"lineNumbers": true}
// JavaScript: Create DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string),
//    sshHost (string), sshUser (string), sshPass (secureString),
//    supportedDomains (Array/string)
// Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
@@ -341,9 +347,12 @@ The schema will include a single scriptable task:
And it's going to be *pretty damn similar* to the other one:
```javascript
// torchlight! {"lineNumbers": true}
// JavaScript: Delete DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string),
//    sshHost (string), sshUser (string), sshPass (secureString),
//    supportedDomains (Array/string)
// Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
@@ -396,9 +405,9 @@ Once the deployment completes, I go back into vRO, find the most recent item in
![Workflow success!](20210813_workflow_success.png)
And I can run a quick query to make sure that name actually resolves:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A # [tl! .cmd]
172.16.30.10 # [tl! .nocopy]
```
It works!
@@ -410,8 +419,8 @@ Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning ta
![VM Deprovisioning workflow](20210813_workflow_deletion.png)
And I can `dig` a little more to make sure the name doesn't resolve anymore:
```shell
dig +short bow-ttst-xxx023.lab.bowdre.net A # [tl! .cmd]
```


@@ -19,8 +19,8 @@ Here's how.
#### Step Zero: Prereqs
You'll need Windows 10 1903 build 18362 or newer (on x64). You can check by running `ver` from a Command Prompt:
```powershell
ver # [tl! .cmd_pwsh]
Microsoft Windows [Version 10.0.18363.1082] # [tl! .nocopy]
```
We're interested in that third set of numbers. 18363 is bigger than 18362, so we're good to go!
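For illustration, that version gate boils down to a simple integer comparison on the third field. Here's a purely illustrative sketch in shell (the real check is just eyeballing the `ver` output; the `18363` below is that value typed in by hand):

```shell
# purely illustrative: compare the build number from `ver` against the minimum
build=18363    # third field of "10.0.18363.1082" from the output above
if [ "$build" -ge 18362 ]; then
  verdict="WSL2 supported"
else
  verdict="build too old for WSL2"
fi
echo "$verdict"
```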
@@ -28,13 +28,13 @@ We're interested in that third set of numbers. 18363 is bigger than 18362 so we'
*(Not needed if you've already been using WSL1.)*
You can do this by dropping the following into an elevated PowerShell prompt:
```powershell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart # [tl! .cmd_pwsh]
```
#### Step Two: Enable the Virtual Machine Platform feature
Drop this in an elevated PowerShell:
```powershell
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart # [tl! .cmd_pwsh]
```
And then reboot (this is still Windows, after all).
@@ -44,22 +44,22 @@ Download it from [here](https://wslstorestorage.blob.core.windows.net/wslblob/ws
#### Step Four: Set WSL2 as your default
Open a PowerShell window and run:
```powershell
wsl --set-default-version 2 # [tl! .cmd_pwsh]
```
#### Step Five: Install a Linux distro, or upgrade an existing one
If you're brand new to this WSL thing, head over to the [Microsoft Store](https://aka.ms/wslstore) and download your favorite Linux distribution. Once it's installed, launch it and you'll be prompted to set up a Linux username and password.
If you've already got a WSL1 distro installed, first run `wsl -l -v` in PowerShell to make sure you know the distro name:
```powershell
wsl -l -v # [tl! .cmd_pwsh]
  NAME      STATE           VERSION # [tl! .nocopy:1]
* Debian    Running         2
```
And then upgrade the distro to WSL2 with `wsl --set-version <distro_name> 2`:
```powershell
wsl --set-version Debian 2 # [tl! .cmd_pwsh]
Conversion in progress, this may take a few minutes... # [tl! .nocopy]
```
Cool!


@@ -42,12 +42,13 @@ I'm going to use the [Docker setup](https://docs.ntfy.sh/install/#docker) on a s
#### Ntfy in Docker
So I'll start by creating a new directory at `/opt/ntfy/` to hold the goods, and create a compose config.
```shell
sudo mkdir -p /opt/ntfy # [tl! .cmd:1]
sudo vim /opt/ntfy/docker-compose.yml
```
```yaml
# torchlight! {"lineNumbers": true}
# /opt/ntfy/docker-compose.yml
version: "2.3"
@@ -81,21 +82,22 @@ This config will create/mount folders in the working directory to store the ntfy
I can go ahead and bring it up:
```shell
sudo docker-compose up -d # [tl! focus:start .cmd]
Creating network "ntfy_default" with the default driver # [tl! .nocopy:start]
Pulling ntfy (binwiederhier/ntfy:)...
latest: Pulling from binwiederhier/ntfy # [tl! focus:end]
7264a8db6415: Pull complete
1ac6a3b2d03b: Pull complete
Digest: sha256:da08556da89a3f7317557fd39cf302c6e4691b4f8ce3a68aa7be86c4141e11c8
Status: Downloaded newer image for binwiederhier/ntfy:latest # [tl! focus:1]
Creating ntfy ... done # [tl! .nocopy:end]
```
#### Caddy Reverse Proxy
I'll also want to add [the following](https://docs.ntfy.sh/config/#nginxapache2caddy) to my Caddy config:
```text
# torchlight! {"lineNumbers": true}
# /etc/caddy/Caddyfile
ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
    reverse_proxy localhost:2586
@@ -112,8 +114,8 @@ ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev {
```
And I'll restart Caddy to apply the config:
```shell
sudo systemctl restart caddy # [tl! .cmd]
```
Now I can point my browser to `https://ntfy.runtimeterror.dev` and see the web interface:
@@ -138,6 +140,7 @@ So now I've got my own ntfy server, and I've verified that it works for unauthen
I'll start by creating a `server.yml` config file which will be mounted into the container. This config will specify where to store the user database and switch the default ACL to `deny-all`:
```yaml
# torchlight! {"lineNumbers": true}
# /opt/ntfy/etc/ntfy/server.yml
auth-file: "/var/lib/ntfy/user.db"
auth-default-access: "deny-all"
@@ -145,8 +148,8 @@ base-url: "https://ntfy.runtimeterror.dev"
```
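Since ntfy sits behind Caddy here, the `behind-proxy` option is also worth knowing about: it tells ntfy to rate-limit visitors by the `X-Forwarded-For` header instead of the proxy's own IP. This is my own optional addition, not something this setup strictly requires:

```yaml
# /opt/ntfy/etc/ntfy/server.yml (optional addition)
# rate-limit by X-Forwarded-For instead of the reverse proxy's IP
behind-proxy: true
```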
I can then restart the container, and try again to subscribe to the same (or any other) topic:
```shell
sudo docker-compose down && sudo docker-compose up -d # [tl! .cmd]
```
@@ -154,36 +157,34 @@ Now I get prompted to log in:
![Login prompt](login_required.png)
I'll need to use the ntfy CLI to create/manage entries in the user DB, and that means first grabbing a shell inside the container:
```shell
sudo docker exec -it ntfy /bin/sh # [tl! .cmd]
```
For now, I'm going to create three users: one as an administrator, one as a "writer", and one as a "reader". I'll be prompted for a password for each:
```shell
ntfy user add --role=admin administrator # [tl! .cmd]
user administrator added with role admin # [tl! .nocopy:1]

ntfy user add writer # [tl! .cmd]
user writer added with role user # [tl! .nocopy:1]

ntfy user add reader # [tl! .cmd]
user reader added with role user # [tl! .nocopy]
```
The admin user has global read+write access, but right now the other two can't do anything. Let's make it so that `writer` can write to all topics, and `reader` can read from all topics:
```shell
ntfy access writer '*' write # [tl! .cmd:1]
ntfy access reader '*' read
```
I could lock these down further by selecting specific topic names instead of `'*'`, but this will do fine for now.
Let's go ahead and verify the access as well:
```shell
ntfy access # [tl! .cmd]
user administrator (role: admin, tier: none) # [tl! .nocopy:8]
- read-write access to all topics (admin role)
user reader (role: user, tier: none)
- read-only access to topic *
@@ -195,17 +196,17 @@ user * (role: anonymous, tier: none)
```
While I'm at it, I also want to configure an access token to be used with the `writer` account. I'll be able to use that instead of username+password when publishing messages.
```shell
ntfy token add writer # [tl! .cmd]
token tk_mm8o6cwxmox11wrnh8miehtivxk7m created for user writer, never expires # [tl! .nocopy]
```
I can go back to the web, subscribe to the `testy` topic again using the `reader` credentials, and then test sending an authenticated notification with `curl`:
```curl
curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \ # [tl! .cmd]
  -d "Once more, with auth!" \
  https://ntfy.runtimeterror.dev/testy
{"id":"0dmX9emtehHe","time":1694987274,"expires":1695030474,"event":"message","topic":"testy","message":"Once more, with auth!"} # [tl! .nocopy]
```
![Authenticated notification](authenticated_notification.png)
@@ -222,6 +223,7 @@ I may want to wind up having servers notify for a variety of conditions so I'll
`/usr/local/bin/ntfy_push.sh`:
```shell
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash

curl \
@@ -234,8 +236,8 @@ curl \
Note that I'm using a new topic name now: `server_alerts`. Topics are automatically created when messages are posted to them. I just need to make sure to subscribe to the topic in the web UI (or mobile app) so that I can receive these notifications.
Okay, now let's make it executable and then give it a quick test:
```shell
chmod +x /usr/local/bin/ntfy_push.sh # [tl! .cmd:1]
/usr/local/bin/ntfy_push.sh "Script Test" "This is a test from the magic script I just wrote."
```
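The script leans on its two positional arguments arriving in the right order. Since the full script body is collapsed in this diff, here's a hypothetical sketch of the kind of argument guard such a title+message script could use (function name and usage text are my own, not from the post):

```shell
# hypothetical argument guard for a title+message script like ntfy_push.sh
ntfy_push_args() {
  if [ "$#" -ne 2 ]; then
    echo "usage: ntfy_push.sh TITLE MESSAGE" >&2
    return 1
  fi
  # echo the pieces that would become the notification's title and body
  echo "Title: $1"
  echo "Body: $2"
}

ntfy_push_args "Script Test" "This is a test from the magic script I just wrote."
```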
@@ -246,6 +248,7 @@ I don't know an easy way to tell a systemd service definition to pass arguments
`/usr/local/bin/ntfy_boot_complete.sh`:
```shell
# torchlight! {"lineNumbers": true}
#!/usr/bin/env bash

TITLE="$(hostname -s)"
@@ -255,14 +258,15 @@ MESSAGE="System boot complete"
```
And this one should be executable as well:
```shell
chmod +x /usr/local/bin/ntfy_boot_complete.sh # [tl! .cmd]
```
##### Service Definition
Finally, I can create and register the service definition so that the script will run at each system boot.
`/etc/systemd/system/ntfy_boot_complete.service`:
```ini
# torchlight! {"lineNumbers": true}
[Unit]
After=network.target
@@ -273,8 +277,8 @@ ExecStart=/usr/local/bin/ntfy_boot_complete.sh
WantedBy=default.target
```
```shell
sudo systemctl daemon-reload # [tl! .cmd:1]
sudo systemctl enable --now ntfy_boot_complete.service
```
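One tweak I'd consider (my addition, not in the post): `network.target` only means the network stack has initialized, not that a route actually exists. If the boot notification ever fires before connectivity is ready, the usual fix is to order the unit against `network-online.target` instead:

```ini
[Unit]
# wait for actual connectivity, not just an initialized network stack
Wants=network-online.target
After=network-online.target
```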
@@ -292,7 +296,8 @@ Enabling ntfy as a notification handler is pretty straight-forward, and it will
##### Notify Configuration
I'll add ntfy to Home Assistant by using the [RESTful Notifications](https://www.home-assistant.io/integrations/notify.rest/) integration. For that, I just need to update my instance's `configuration.yaml` to configure the connection.
```yaml
# torchlight! {"lineNumbers": true}
# configuration.yaml
notify:
  - name: ntfy
@@ -309,6 +314,7 @@ notify:
The `Authorization` line references a secret stored in `secrets.yaml`:
```yaml
# torchlight! {"lineNumbers": true}
# secrets.yaml
ntfy_token: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m
```
@@ -327,6 +333,7 @@ I'll use the Home Assistant UI to push a notification through ntfy if any of my
The business end of this is the service call at the end:
```yaml
# torchlight! {"lineNumbers": true}
service: notify.ntfy
data:
  title: Leak detected!


@@ -51,14 +51,14 @@ Running `tanzu completion --help` will tell you what's needed, and you can just
```
So to get the completions to load automatically whenever you start a `bash` shell, run:
```shell
tanzu completion bash > $HOME/.tanzu/completion.bash.inc # [tl! .cmd:1]
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile
```
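If you want to sanity-check what that `printf` actually appends before pointing it at your real `~/.bash_profile`, here's a scratch-file re-creation of the same append step (the temp file is illustrative only):

```shell
# re-create the append against a throwaway file instead of ~/.bash_profile
profile="$(mktemp)"
printf "\n# Tanzu shell completion\nsource '%s/.tanzu/completion.bash.inc'\n" "$HOME" >> "$profile"
# show the source hook that landed in the file
grep -F "completion.bash.inc" "$profile"
```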
For a `zsh` shell, it's:
```shell
echo "autoload -U compinit; compinit" >> ~/.zshrc # [tl! .cmd:1]
tanzu completion zsh > "${fpath[1]}/_tanzu"
```


@@ -85,8 +85,8 @@ Let's start with the gear (hardware and software) I needed to make this work:
The very first task is to write the required firmware image (download [here](https://github.com/jaredmcneill/quartz64_uefi/releases)) to a micro SD card. I used a 64GB card that I had lying around but you could easily get by with a *much* smaller one; the firmware image is tiny, and the card can't be used for storing anything else. Since I'm doing this on a Chromebook, I'll be using the [Chromebook Recovery Utility (CRU)](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) for writing the images to external storage as described [in another post](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/).
After downloading [`QUARTZ64_EFI.img.gz`](https://github.com/jaredmcneill/quartz64_uefi/releases/download/2022-07-20/QUARTZ64_EFI.img.gz), I need to get it into a format recognized by CRU and, in this case, that means extracting the gzipped archive and then compressing the `.img` file into a standard `.zip`:
```shell
gunzip QUARTZ64_EFI.img.gz # [tl! .cmd:1]
zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img
```
@@ -98,8 +98,8 @@ I can then write it to the micro SD card by opening CRU, clicking on the gear ic
I'll also need to prepare the ESXi installation media (download [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM)). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new `ESX-OSData` partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the unavailable/flaky USB support of the Quartz64. (While you *can* install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.)
In any case, to make the downloaded `VMware-VMvisor-Installer-7.0-20133114.aarch64.iso` writeable with CRU all I need to do is add `.bin` to the end of the filename:
```shell
mv VMware-VMvisor-Installer-7.0-20133114.aarch64.iso{,.bin} # [tl! .cmd]
```
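That `{,.bin}` suffix is Bash brace expansion doing the rename in a single step: the part before the braces is repeated once per comma-separated alternative. A quick sketch with a throwaway file (the filename here is made up for illustration):

```shell
# Create a dummy file to rename (hypothetical name, not the real installer ISO)
touch demo-installer.iso

# mv demo-installer.iso{,.bin} expands to:
#   mv demo-installer.iso demo-installer.iso.bin
mv demo-installer.iso{,.bin}

ls demo-installer.iso.bin
```

The same trick works for any "copy/move with a suffix" pattern, e.g. `cp config.yaml{,.bak}`.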
Then it's time to write the image onto the USB drive:
@@ -201,13 +201,13 @@ As I mentioned earlier, my initial goal is to deploy a Tailscale node on my new
#### Deploying Photon OS
VMware provides Photon in a few different formats, as described on the [download page](https://github.com/vmware/photon/wiki/Downloading-Photon-OS). I'm going to use the "OVA with virtual hardware v13 arm64" version so I'll kick off that download of `photon_uefi.ova`. I'm actually going to download that file straight to my `deb01` Linux VM:
```shell
wget https://packages.vmware.com/photon/4.0/Rev2/ova/photon_uefi.ova # [tl! .cmd]
```
and then spawn a quick Python web server to share it out:
```shell
python3 -m http.server # [tl! .cmd]
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... # [tl! .nocopy]
```
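By default `http.server` listens on port `8000` on all interfaces, as the output above shows. The module also accepts a port argument and a `--bind` flag if the defaults don't fit; a minimal sketch, assuming port `8080` is free:

```shell
# Serve the current directory on port 8080, loopback interface only
python3 -m http.server 8080 --bind 127.0.0.1
```

Binding to `127.0.0.1` is handy when you only want local processes to reach the files; here the default all-interfaces bind is the point, since the ESXi host needs to pull the OVA over the network.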
That will let me deploy from a resource already inside my lab network instead of transferring the OVA from my laptop. So now I can go back to my vSphere Client and go through the steps to **Deploy OVF Template** to the new host, and I'll plug in the URL `http://deb01.lab.bowdre.net:8000/photon_uefi.ova`:
@@ -232,13 +232,13 @@ The default password for Photon's `root` user is `changeme`. You'll be forced to
![First login, and the requisite password change](first_login.png)
Now that I'm in, I'll set the hostname appropriately:
```shell
hostnamectl set-hostname pho01 # [tl! .cmd_root]
```
For now, the VM pulled an IP from DHCP but I would like to configure that statically instead. To do that, I'll create a new interface file:
```shell
cat > /etc/systemd/network/10-static-en.network << "EOF" # [tl! .cmd_root]
[Match]
Name = eth0
@@ -251,33 +251,31 @@ DHCP = no
IPForward = yes
EOF

chmod 644 /etc/systemd/network/10-static-en.network # [tl! .cmd_root:1]
systemctl restart systemd-networkd
```
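The quoted `"EOF"` delimiter on that heredoc matters: quoting it tells the shell not to perform any expansion inside the heredoc, so the config lands on disk byte-for-byte. A small sketch (writing to throwaway files in `/tmp` rather than `/etc/systemd/network/`) illustrates the difference:

```shell
# With the delimiter quoted, $HOME is written literally, not expanded
cat > /tmp/demo-quoted.txt << "EOF"
path = $HOME
EOF
grep -F '$HOME' /tmp/demo-quoted.txt

# With an unquoted delimiter, the shell substitutes $HOME before writing
cat > /tmp/demo-unquoted.txt << EOF
path = $HOME
EOF
grep -F '$HOME' /tmp/demo-unquoted.txt || echo "variable was expanded"
```

That's not critical for this particular `.network` file (it contains nothing the shell would expand), but it's a good habit when writing out config files that might contain `$` or backticks.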
I'm including `IPForward = yes` to [enable IP forwarding](https://tailscale.com/kb/1104/enable-ip-forwarding/) for Tailscale.
With networking sorted, it's probably a good idea to check for and apply any available updates:
```shell
tdnf update -y # [tl! .cmd_root]
```
I'll also go ahead and create a normal user account (with sudo privileges) for me to use:
```shell
useradd -G wheel -m john # [tl! .cmd_root:1]
passwd john
```
Now I can use SSH to connect to the VM and ditch the web console:
```shell
ssh pho01.lab.bowdre.net # [tl! .cmd]
Password: # [tl! .nocopy]

sudo whoami # [tl! .cmd]
# [tl! .nocopy:start]
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
@@ -286,7 +284,7 @@ Administrator. It usually boils down to these three things:
    #3) With great power comes great responsibility.
[sudo] password for john
root # [tl! .nocopy:end]
```
Looking good! I'll now move on to the justification[^justification] for this entire exercise:
@@ -295,45 +293,42 @@ Looking good! I'll now move on to the justification[^justification] for this ent
#### Installing Tailscale
If I *weren't* doing this on hard mode, I could use Tailscale's [install script](https://tailscale.com/download) like I do on every other Linux system. Hard mode is what I do though, and the installer doesn't directly support Photon OS. I'll instead consult the [manual install instructions](https://tailscale.com/download/linux/static) which tell me to download the appropriate binaries from [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static). So I'll grab the link for the latest `arm64` build and pull that down to the VM:
```shell
curl https://pkgs.tailscale.com/stable/tailscale_1.22.2_arm64.tgz --output tailscale_arm64.tgz # [tl! .cmd]
```
Then I can unpack it:
```shell
sudo tdnf install tar # [tl! .cmd:2]
tar xvf tailscale_arm64.tgz
cd tailscale_1.22.2_arm64/
```
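Those `tar` flags are worth decoding: `x` extracts, `v` prints each file as it's written, and `f` names the archive; modern `tar` detects the gzip compression on its own. A quick sketch with a throwaway archive (the names are made up for illustration):

```shell
# Build a small .tgz to play with
mkdir -p demo_pkg
echo "binary placeholder" > demo_pkg/demo-tool
tar czf demo_pkg.tgz demo_pkg/

# Extract it; tar infers the gzip compression from the archive itself
tar xvf demo_pkg.tgz

cat demo_pkg/demo-tool
```

Adding `-C some/dir` would extract into a different directory instead of the current one, which can keep a home directory tidier when unpacking release tarballs.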
So I've got the `tailscale` and `tailscaled` binaries as well as some sample service configs in the `systemd` directory:
```shell
ls # [tl! .cmd]
total 32288 # [tl! .nocopy:4]
drwxr-x--- 2 john users     4096 Mar 18 02:44 systemd
-rwxr-x--- 1 john users 12187139 Mar 18 02:44 tailscale
-rwxr-x--- 1 john users 20866538 Mar 18 02:44 tailscaled

ls ./systemd # [tl! .cmd]
total 8 # [tl! .nocopy:2]
-rw-r----- 1 john users 287 Mar 18 02:44 tailscaled.defaults
-rw-r----- 1 john users 674 Mar 18 02:44 tailscaled.service
```
Dealing with the binaries is straightforward. I'll drop them into `/usr/bin/` and `/usr/sbin/` (respectively) and set the file permissions:
```shell
sudo install -m 755 tailscale /usr/bin/ # [tl! .cmd:1]
sudo install -m 755 tailscaled /usr/sbin/
```
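`install` rolls the copy and the permission change into one command; it's roughly equivalent to a `cp` followed by a `chmod`. A sketch with throwaway paths (the file and directory names here are hypothetical):

```shell
# Create a dummy "binary"
echo '#!/bin/sh' > demo-tool

# Copy it into a target directory with mode 755 in a single command
mkdir -p ./demo-bin
install -m 755 demo-tool ./demo-bin/

# The copy is owner-writable and world-executable
ls -l ./demo-bin/demo-tool
```

`install` can also set ownership (`-o`/`-g`) and create the destination directory (`-D`) in the same step, which is why packaging scripts tend to prefer it over `cp`.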
Then I'll descend to the `systemd` folder and see what's up:
```shell
cd systemd/ # [tl! .cmd:1]
cat tailscaled.defaults
# Set the port to listen on for incoming VPN packets. [tl! .nocopy:8]
# Remote nodes will automatically be informed about the new port number,
# but you might want to configure this in order to set external firewall
# settings.
@@ -341,10 +336,9 @@ PORT="41641"
# Extra flags you might want to pass to tailscaled.
FLAGS=""

cat tailscaled.service # [tl! .cmd]
[Unit] # [tl! .nocopy:start]
Description=Tailscale node agent
Documentation=https://tailscale.com/kb/
Wants=network-pre.target
@@ -367,28 +361,28 @@ CacheDirectoryMode=0750
Type=notify
[Install]
WantedBy=multi-user.target # [tl! .nocopy:end]
```
`tailscaled.defaults` contains the default configuration that will be referenced by the service, and `tailscaled.service` tells me that it expects to find it at `/etc/defaults/tailscaled`. So I'll copy it there and set the perms:
```shell
sudo install -m 644 tailscaled.defaults /etc/defaults/tailscaled # [tl! .cmd]
```
`tailscaled.service` will get dropped in `/usr/lib/systemd/system/`:
```shell
sudo install -m 644 tailscaled.service /usr/lib/systemd/system/ # [tl! .cmd]
```
Then I'll enable the service and start it:
```shell
sudo systemctl enable tailscaled.service # [tl! .cmd:1]
sudo systemctl start tailscaled.service
```
And finally log in to Tailscale, including my `tag:home` tag for [ACL purposes](/secure-networking-made-simple-with-tailscale/#acls) and a route advertisement for my home network so that my other Tailscale nodes can use this one to access other devices as well:
```shell
sudo tailscale up --advertise-tags "tag:home" --advertise-routes "192.168.1.0/24" # [tl! .cmd]
```
That will return a URL I can use to authenticate, and I'll then be able to view and manage the new Tailscale node from the `login.tailscale.com` admin portal:


@@ -74,9 +74,9 @@ Success! My new ingress rules appear at the bottom of the list.
![New rules added](s5Y0rycng.png)
That gets traffic from the internet to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly.) I'll start by listing the existing rules on the `INPUT` chain:
```shell
sudo iptables -L INPUT --line-numbers # [tl! .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:7]
num  target     prot opt source     destination
1    ACCEPT     all  --  anywhere   anywhere     state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere   anywhere
@@ -87,15 +87,15 @@ num target prot opt source destination
```
Note the `REJECT all` statement at line `6`. I'll need to insert my new `ACCEPT` rules for ports `80` and `443` above that implicit deny all:
```shell
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:1]
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```
And then I'll confirm that the order is correct:
```shell
sudo iptables -L INPUT --line-numbers # [tl! .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:9]
num  target     prot opt source     destination
1    ACCEPT     all  --  anywhere   anywhere     state RELATED,ESTABLISHED
2    ACCEPT     icmp --  anywhere   anywhere
@@ -108,9 +108,9 @@ num target prot opt source destination
```
I can use `nmap` running from my local Linux environment to confirm that I can now reach those ports on the VM. (They're still "closed" since nothing is listening on the ports yet, but the connections aren't being rejected.)
```shell
nmap -Pn matrix.bowdre.net # [tl! .cmd]
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT # [tl! .nocopy:10]
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.086s latency).
Other addresses for matrix.bowdre.net (not scanned): 2607:7700:0:1d:0:1:9688:6b4
@@ -125,16 +125,16 @@ Nmap done: 1 IP address (1 host up) scanned in 8.44 seconds
Cool! Before I move on, I'll be sure to make the rules persistent so they'll be re-applied whenever `iptables` starts up:
```shell
sudo netfilter-persistent save # [tl! .cmd]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
### Reverse proxy setup
I had initially planned on using `certbot` to generate Let's Encrypt certificates, and then reference the certs as needed from an `nginx` or Apache reverse proxy configuration. While researching how the [proxy would need to be configured to front Synapse](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md), I found this sample `nginx` configuration:
```text
# torchlight! {"lineNumbers": true}
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
@@ -159,7 +159,8 @@ server {
```
And this sample Apache one:
```text
# torchlight! {"lineNumbers": true}
<VirtualHost *:443>
    SSLEngine on
    ServerName matrix.example.com
@@ -185,7 +186,8 @@ And this sample Apache one:
```
I also found this sample config for another web server called [Caddy](https://caddyserver.com):
```text
# torchlight! {"lineNumbers": true}
matrix.example.com {
    reverse_proxy /_matrix/* http://localhost:8008
    reverse_proxy /_synapse/client/* http://localhost:8008
@@ -198,8 +200,8 @@ example.com:8448 {
One of these looks much simpler than the other two. I'd never heard of Caddy so I did some quick digging, and I found that it would actually [handle the certificates entirely automatically](https://caddyserver.com/docs/automatic-https) - in addition to having a much easier config. [Installing Caddy](https://caddyserver.com/docs/install#debian-ubuntu-raspbian) wasn't too bad, either:
```shell
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https # [tl! .cmd:4]
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo apt-key add -
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
@@ -207,7 +209,8 @@ sudo apt install caddy
```
Then I just need to put my configuration into the default `Caddyfile`, including the required `.well-known` delegation piece from earlier.
```text
# torchlight! {"lineNumbers": true}
# /etc/caddy/Caddyfile
matrix.bowdre.net {
    reverse_proxy /_matrix/* http://localhost:8008
@@ -228,16 +231,16 @@ I set up the `bowdre.net` section to return the appropriate JSON string to tell
(I wouldn't need that section at all if I were using a separate web server for `bowdre.net`; instead, I'd basically just add that `respond /.well-known/matrix/server` line to that other server's config.)
Now to enable the `caddy` service, start it, and restart it so that it loads the new config:
```shell
sudo systemctl enable caddy # [tl! .cmd:2]
sudo systemctl start caddy
sudo systemctl restart caddy
```
If I repeat my `nmap` scan from earlier, I'll see that the HTTP and HTTPS ports are now open. The server still isn't actually serving anything on those ports yet, but at least it's listening.
```shell
nmap -Pn matrix.bowdre.net # [tl! .cmd]
Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT # [tl! .nocopy:9]
Nmap scan report for matrix.bowdre.net (150.136.6.180)
Host is up (0.034s latency).
Not shown: 997 filtered ports
@@ -265,57 +268,56 @@ Okay, let's actually serve something up now.
#### Docker setup
Before I can get on with [deploying Synapse in Docker](https://hub.docker.com/r/matrixdotorg/synapse), I first need to [install Docker](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) on the system:
```shell
sudo apt-get install \ # [tl! .cmd]
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \ # [tl! .cmd]
  sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \ # [tl! .cmd]
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update # [tl! .cmd:1]
sudo apt install docker-ce docker-ce-cli containerd.io
```
I'll also [install Docker Compose](https://docs.docker.com/compose/install/#install-compose):
```shell
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \ # [tl! .cmd]
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose # [tl! .cmd]
```
And I'll add my `ubuntu` user to the `docker` group so that I won't have to run every docker command with `sudo`:
```shell
sudo usermod -G docker -a ubuntu # [tl! .cmd]
```
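Once the new login picks up the group change, membership can be confirmed with `id`. Here's a small helper sketch (the function name is my own invention); `docker` is the group this post cares about, but it works for any user/group pair:

```shell
# Return success if user $1 belongs to group $2
in_group() {
  id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

# Example, assuming the 'ubuntu' user and 'docker' group from this post:
if in_group ubuntu docker; then
  echo "ubuntu can talk to the Docker daemon without sudo"
fi
```

If `in_group` reports failure right after running `usermod`, that's expected: group membership is evaluated at login, which is exactly why the next step is to log out and back in.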
I'll log out and back in so that the membership change takes effect, and then test both `docker` and `docker-compose` to make sure they're working:
```shell
docker --version # [tl! .cmd]
Docker version 20.10.7, build f0df350 # [tl! .nocopy:1]

docker-compose --version # [tl! .cmd]
docker-compose version 1.29.2, build 5becea4c # [tl! .nocopy]
```
#### Synapse setup
Now I'll make a place for the Synapse installation to live, including a `data` folder that will be mounted into the container:
```shell
sudo mkdir -p /opt/matrix/synapse/data # [tl! .cmd:1]
cd /opt/matrix/synapse
```
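The `-p` flag is what lets that single command create the whole `/opt/matrix/synapse/data` path: it makes every missing parent and succeeds quietly if the path already exists, so the command is safe to re-run. A quick sketch against a throwaway path:

```shell
# Creates the full nested path in one shot
mkdir -p /tmp/demo/matrix/synapse/data

# Re-running is a no-op instead of an error
mkdir -p /tmp/demo/matrix/synapse/data

ls -d /tmp/demo/matrix/synapse/data
```

Without `-p`, `mkdir` would fail both when a parent is missing and when the target already exists.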
And then I'll create the compose file to define the deployment:
```yaml
# torchlight! {"lineNumbers": true}
# /opt/matrix/synapse/docker-compose.yaml
services:
  synapse:
@@ -330,13 +332,13 @@ services:
Before I can fire this up, I'll need to generate an initial configuration as [described in the documentation](https://hub.docker.com/r/matrixdotorg/synapse). Here I'll specify the server name that I'd like other Matrix servers to know mine by (`bowdre.net`):
```shell
docker run -it --rm \ # [tl! .cmd]
  -v "/opt/matrix/synapse/data:/data" \
  -e SYNAPSE_SERVER_NAME=bowdre.net \
  -e SYNAPSE_REPORT_STATS=yes \
  matrixdotorg/synapse generate
# [tl! .nocopy:start]
Unable to find image 'matrixdotorg/synapse:latest' locally
latest: Pulling from matrixdotorg/synapse
69692152171a: Pull complete
@@ -353,7 +355,7 @@ Status: Downloaded newer image for matrixdotorg/synapse:latest
Creating log config /data/bowdre.net.log.config
Generating config file /data/homeserver.yaml
Generating signing key file /data/bowdre.net.signing.key
A config file has been generated in '/data/homeserver.yaml' for server name 'bowdre.net'. Please review this file and customise it to your needs. # [tl! .nocopy:end]
```
As instructed, I'll use `sudo vi data/homeserver.yaml` to review/modify the generated config. I'll leave
@@ -375,16 +377,16 @@ so that I can create a user account without fumbling with the CLI. I'll be sure
There are a bunch of other useful configurations that can be made here, but these will do to get things going for now.
Time to start it up:
```shell
docker-compose up -d # [tl! .cmd]
Creating network "synapse_default" with the default driver # [tl! .nocopy:1]
Creating synapse ... done
```
And use `docker ps` to confirm that it's running:
```shell
docker ps # [tl! .cmd]
CONTAINER ID   IMAGE                  COMMAND       CREATED          STATUS                    PORTS                                          NAMES # [tl! .nocopy:1]
573612ec5735   matrixdotorg/synapse   "/start.py"   25 seconds ago   Up 23 seconds (healthy)   8009/tcp, 127.0.0.1:8008->8008/tcp, 8448/tcp   synapse
```
@@ -417,21 +419,21 @@ All in, I'm pretty pleased with how this little project turned out, and I learne
### Update: Updating
After a while, it's probably a good idea to update both the Ubuntu server and the Synapse container running on it. Updating the server itself is as easy as:
```shell
sudo apt update # [tl! .cmd:1]
sudo apt upgrade
```
Here's what I do to update the container:
```shell
# Move to the working directory # [tl! .nocopy]
cd /opt/matrix/synapse # [tl! .cmd]
# Pull a new version of the synapse image # [tl! .nocopy]
docker-compose pull # [tl! .cmd]
# Stop the container # [tl! .nocopy]
docker-compose down # [tl! .cmd]
# Start it back up without the old version # [tl! .nocopy]
docker-compose up -d --remove-orphans # [tl! .cmd]
# Periodically remove the old docker images # [tl! .nocopy]
docker image prune # [tl! .cmd]
```
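Those steps can also be bundled into a small shell function for convenience. This is just a sketch: `update_synapse` is a name I made up, and it assumes the same `/opt/matrix/synapse` working directory and `docker-compose` setup used above.

```shell
# Hypothetical helper wrapping the update steps above into one command.
# Assumes docker-compose and the /opt/matrix/synapse directory from this post.
update_synapse() {
  cd /opt/matrix/synapse || return 1
  docker-compose pull &&
    docker-compose down &&
    docker-compose up -d --remove-orphans &&
    docker image prune -f
}
```

Dropping that into `~/.bashrc` makes the whole update a single `update_synapse` away.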
@@ -14,40 +14,41 @@ I found myself with a sudden need for parsing a Linux server's logs to figure ou
### Find IP-ish strings
This will get you all occurrences of things which look vaguely like IPv4 addresses:
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT # [tl! .cmd]
```
(It's not a perfect IP address regex since it would match things like `987.654.321.555` but it's close enough for my needs.)
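If you did want to be stricter about it, a longer pattern can require each octet to fall in the 0-255 range. Here's a sketch using GNU grep's `-E`; the sample addresses are made up purely for the demo:

```shell
# Sample input: one plausible address and one with out-of-range octets
printf '10.0.0.1\n987.654.321.555\n' | \
  grep -o -E '\b((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.){3}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\b'
# only 10.0.0.1 makes it through
```

The alternation `25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9]` matches exactly 0-255, so the bogus `987.654.321.555` gets dropped.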
### Filter out `localhost`
The log likely includes a LOT of traffic to/from `127.0.0.1` so let's toss out `localhost` by piping through `grep -v "127.0.0.1"` (`-v` will do an inverse match - only return results which *don't* match the given expression):
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" # [tl! .cmd]
```
### Count up the duplicates
Now we need to know how many times each IP shows up in the log. Since `uniq` only collapses *adjacent* duplicate lines, I'll `sort` the output first and then pass it through `uniq -c` (the `-c` flag will return a count of how many times each result appears):
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c # [tl! .cmd]
```
### Sort the results
We can use `sort` again to sort the counts. `-n` tells it to sort based on numeric rather than character values, and `-r` reverses the list so that the larger numbers appear at the top:
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r # [tl! .cmd]
```
### Top 5
And, finally, let's use `head -n 5` to only get the first five results:
```shell
grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5 # [tl! .cmd]
```
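To sanity-check the full chain without a real log handy, here's the same pipeline (with a `sort` ahead of `uniq -c` so non-adjacent repeats get counted together) run against a few fabricated sample lines - the addresses are made up for the demo:

```shell
# Fabricated log fragment; real log lines will carry much more detail
printf '%s\n' \
  'GET / from 203.0.113.5' \
  'GET / from 198.51.100.7' \
  'GET / from 203.0.113.5' \
  'GET / from 127.0.0.1' \
  'GET / from 203.0.113.5' |
  grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' |
  grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5
# the thrice-repeated 203.0.113.5 floats to the top with a count of 3
```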
### Bonus round!
You know how old log files get rotated and compressed into files like `logname.1.gz`? I *very* recently learned that there are versions of the standard Linux text manipulation tools which can work directly on compressed log files, without having to first extract the files. I'd been doing things the hard way for years - no longer, now that I know about `zcat`, `zdiff`, `zgrep`, and `zless`!
So let's use a `for` loop to iterate through 20 of those compressed logs, and use `date -r [filename]` to get the timestamp for each log as we go:
```shell
for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \ # [tl! .cmd]
ACCESS_LOG.log.$i.gz | grep -v "127.0.0.1" | sort | uniq -c | sort -n -r | head -n 5; done
```
Nice!
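A quick way to convince yourself the `z`-tools work as advertised is with a throwaway compressed file - the path and contents below are purely for the demo:

```shell
# Build a tiny compressed "log" and then search it without decompressing first
printf 'hit from 203.0.113.9\nhit from 203.0.113.9\n' | gzip > /tmp/demo_access.1.gz
zgrep -c '203.0.113.9' /tmp/demo_access.1.gz
# → 2
```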
@@ -39,9 +39,9 @@ ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verif
```
Further, attempting to pull down that URL with `curl` also failed:
```shell
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery # [tl! .cmd]
curl: (60) SSL certificate problem: self signed certificate in certificate chain # [tl! .nocopy:5]
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
@@ -61,20 +61,21 @@ So here's what I did to get things working in my homelab:
![Exporting the self-signed CA cert](20211105_export_selfsigned_ca.png)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`.
3. Append the certificate to the end of the system `ca-bundle.crt`:
```shell
cat <vra.crt >> /etc/pki/tls/certs/ca-bundle.crt # [tl! .cmd]
```
4. Test that I can now `curl` from vRA without a certificate error:
```curl
curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery # [tl! .cmd]
{"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""} # [tl! .nocopy]
```
5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding
```ini
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
```
above the `ExecStart` line:
```ini
# torchlight! {"lineNumbers": true}
# /usr/lib/systemd/system/raas.service
[Unit]
Description=The SaltStack Enterprise API Server
@@ -90,15 +91,15 @@ RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
PermissionsStartOnly=true
ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt # [tl! focus]
ExecStart=/usr/bin/raas
TimeoutStopSec=90
[Install]
WantedBy=multi-user.target
```
6. Stop and restart the `raas` service:
```shell
systemctl daemon-reload # [tl! .cmd:2]
systemctl stop raas
systemctl start raas
```
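Before restarting anything, it can be worth confirming that the certificate actually validates with `openssl verify`. Here's a hedged sketch that substitutes a throwaway self-signed certificate for the real `vra.crt` - the `/tmp` paths and CN are invented for the demo:

```shell
# Generate a stand-in self-signed cert (mimicking the exported vra.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo-vra.crt \
  -subj "/CN=vra.lab.bowdre.net"
# A self-signed cert is its own root, so it verifies against a bundle containing itself
openssl verify -CAfile /tmp/demo-vra.crt /tmp/demo-vra.crt
# → /tmp/demo-vra.crt: OK
```

Pointing `-CAfile` at the real `ca-bundle.crt` and verifying the real `vra.crt` works the same way.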
@@ -110,8 +111,8 @@ systemctl start raas
The steps for doing this at work with an enterprise CA were pretty similar, with just slightly-different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in `.p7b` format.
2. Use `openssl` to extract the individual certificates:
```shell
openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem # [tl! .cmd]
```
Copy it to the SSC appliance, and then pick up with Step 3 above.
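If you don't have an enterprise CA handy to test against, the conversion can be sanity-checked by round-tripping a throwaway certificate through the `.p7b` format - the file names and CN here are illustrative only:

```shell
# Create a stand-in CA cert, bundle it as PKCS#7, then extract it back out
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -subj "/CN=demo-ca"
openssl crl2pkcs7 -nocrl -certfile /tmp/demo-ca.pem -out /tmp/demo-chain.p7b
openssl pkcs7 -inform PEM -outform PEM -in /tmp/demo-chain.p7b -print_certs > /tmp/demo-chain.pem
grep -c 'BEGIN CERTIFICATE' /tmp/demo-chain.pem # → 1
```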
@@ -44,8 +44,8 @@ After hitting **Execute**, the Swagger UI will populate the *Responses* section
![curl request format](login_controller_3.png)
So I could easily replicate this using the `curl` utility by just copying and pasting the following into a shell:
```curl
curl -X 'POST' \ # [tl! .cmd]
  'https://vra.lab.bowdre.net/csp/gateway/am/api/login' \
  -H 'accept: */*' \
  -H 'Content-Type: application/json' \
@@ -69,31 +69,32 @@ Now I can go find an IaaS API that I'm interested in querying (like `/iaas/api/f
![Using Swagger to query for flavor mappings](flavor_mappings_swagger_request.png)
And here's the result:
```json
// torchlight! {"lineNumbers": true}
{
  "content": [
    {
      "flavorMappings": {
        "mapping": {
          "1vCPU | 2GB [tiny]": { // [tl! focus]
            "cpuCount": 1,
            "memoryInMB": 2048
          },
          "1vCPU | 1GB [micro]": { // [tl! focus]
            "cpuCount": 1,
            "memoryInMB": 1024
          },
          "2vCPU | 4GB [small]": { // [tl! focus]
            "cpuCount": 2,
            "memoryInMB": 4096
          }
        }, // [tl! collapse:5]
        "_links": {
          "region": {
            "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f"
          }
        }
      }, // [tl! collapse:start]
      "externalRegionId": "Datacenter:datacenter-39056",
      "cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
      "name": "",
@@ -107,43 +108,43 @@ And here's the result:
        },
        "region": {
          "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f"
        } // [tl! collapse:end]
      }
    },
    {
      "flavorMappings": {
        "mapping": {
          "2vCPU | 8GB [medium]": { // [tl! focus]
            "cpuCount": 2,
            "memoryInMB": 8192
          },
          "1vCPU | 2GB [tiny]": { // [tl! focus]
            "cpuCount": 1,
            "memoryInMB": 2048
          },
          "8vCPU | 16GB [giant]": { // [tl! focus]
            "cpuCount": 8,
            "memoryInMB": 16384
          },
          "1vCPU | 1GB [micro]": { // [tl! focus]
            "cpuCount": 1,
            "memoryInMB": 1024
          },
          "2vCPU | 4GB [small]": { // [tl! focus]
            "cpuCount": 2,
            "memoryInMB": 4096
          },
          "4vCPU | 12GB [large]": { // [tl! focus]
            "cpuCount": 4,
            "memoryInMB": 12288
          }
        }, // [tl! collapse:5]
        "_links": {
          "region": {
            "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
          }
        }
      }, // [tl! collapse:start]
      "externalRegionId": "Datacenter:datacenter-1001",
      "cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
      "name": "",
@@ -158,7 +159,7 @@ And here's the result:
        "region": {
          "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136"
        }
      } // [tl! collapse:end]
    }
  ],
  "totalElements": 2,
@@ -175,61 +176,62 @@ As you can see, Swagger can really help to jump-start the exploration of a new A
[HTTPie](https://httpie.io/) is a handy command-line utility optimized for interacting with web APIs. This will make things easier as I dig deeper.
Installing the [Debian package](https://httpie.io/docs/cli/debian-and-ubuntu) is a piece of ~~cake~~ _pie_[^pie]:
```shell
curl -SsL https://packages.httpie.io/deb/KEY.gpg | sudo apt-key add - # [tl! .cmd:3]
sudo curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list
sudo apt update
sudo apt install httpie
```
Once installed, running `http` will give me a quick overview of how to use this new tool:
```shell
http # [tl! .cmd]
usage: # [tl! .nocopy:start]
    http [METHOD] URL [REQUEST_ITEM ...]
error:
    the following arguments are required: URL
for more information:
    run 'http --help' or visit https://httpie.io/docs/cli # [tl! .nocopy:end]
```
HTTPie cleverly interprets anything passed after the URL as a [request item](https://httpie.io/docs/cli/request-items), and it determines the item type based on a simple key/value syntax:
> Each request item is simply a key/value pair separated with the following characters: `:` (headers), `=` (data field, e.g., JSON, form), `:=` (raw data field), `==` (query parameters), `@` (file upload).
So my earlier request for an authentication token becomes:
```shell
https POST vra.lab.bowdre.net/csp/gateway/am/api/login username='vra' password='********' domain='lab.bowdre.net' # [tl! .cmd]
```
{{% notice tip "Working with Self-Signed Certificates" %}}
If your vRA endpoint is using a self-signed or otherwise untrusted certificate, pass the HTTPie option `--verify=no` to ignore certificate errors:
```shell
https --verify=no POST [URL] [REQUEST_ITEMS] # [tl! .cmd]
```
{{% /notice %}}
Running that will return a bunch of interesting headers but I'm mainly interested in the response body:
```json
{
  "cspAuthToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlh[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-npz4Csv5FwcXt0fa"
}
```
There's the auth token[^token] that I'll need for subsequent requests. I'll store that in a variable so that it's easier to wield:
```shell
token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlh[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-npz4Csv5FwcXt0fa # [tl! .cmd]
```
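Rather than copying the token by hand, it can also be pulled out of the response body programmatically. Here's a sketch using plain `sed` on a captured response; the tiny JSON below is a stand-in for the real login reply:

```shell
# Stand-in for the login response body; the real one comes from the https POST above
response='{"cspAuthToken": "abc123def456"}'
# Extract the value of the cspAuthToken field
token="$(printf '%s' "$response" | sed -n 's/.*"cspAuthToken": *"\([^"]*\)".*/\1/p')"
echo "$token" # → abc123def456
```

A JSON-aware tool like `jq` would be more robust if it's available, but `sed` gets the job done for a single well-known field.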
So now if I want to find out which images have been configured in vRA, I can ask:
```shell
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token" # [tl! .cmd]
```
{{% notice note "Request Items" %}}
Remember from above that HTTPie will automatically insert key/value pairs separated by a colon into the request header.
{{% /notice %}}
And I'll get back some headers followed by a JSON object detailing the defined image mappings broken up by region:
```json
// torchlight! {"lineNumbers": true}
{
  "content": [
    {
@@ -240,10 +242,10 @@ And I'll get back some headers followed by an JSON object detailing the defined
      },
      "externalRegionId": "Datacenter:datacenter-39056",
      "mapping": {
        "Photon 4": { // [tl! focus]
          "_links": {
            "region": {
              "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" // [tl! focus]
            }
          },
          "cloudConfig": "",
@@ -266,10 +268,10 @@ And I'll get back some headers followed by an JSON object detailing the defined
      },
      "externalRegionId": "Datacenter:datacenter-1001",
      "mapping": {
        "Photon 4": { // [tl! focus]
          "_links": {
            "region": {
              "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" // [tl! focus]
            }
          },
          "cloudConfig": "",
@@ -282,10 +284,10 @@ And I'll get back some headers followed by an JSON object detailing the defined
          "name": "photon",
          "osFamily": "LINUX"
        },
        "Windows Server 2019": { // [tl! focus]
          "_links": {
            "region": {
              "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" // [tl! focus]
            }
          },
          "cloudConfig": "",
@@ -376,7 +378,8 @@ I'll head into **Library > Actions** to create a new action inside my `com.virtu
| `configurationName` | `string` | Name of Configuration |
| `variableName` | `string` | Name of desired variable inside Configuration |
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: getConfigValue action
Inputs: path (string), configurationName (string), variableName (string)
@@ -396,7 +399,8 @@ Next, I'll create another action in my `com.virtuallypotato.utility` module whic
![vraLogin action](vraLogin_action.png)
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraLogin action
Inputs: none
@@ -428,7 +432,8 @@ I like to clean up after myself so I'm also going to create a `vraLogout` action
|:--- |:--- |:--- |
| `token` | `string` | Auth token of the session to destroy |
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraLogout action
Inputs: token (string)
@@ -458,7 +463,8 @@ My final "utility" action for this effort will run in between `vraLogin` and `vr
|`uri`|`string`|Path to API controller (`/iaas/api/flavor-profiles`)|
|`content`|`string`|Any additional data to pass with the request|
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraExecute action
Inputs: token (string), method (string), uri (string), content (string)
@@ -496,7 +502,8 @@ This action will:
Other actions wanting to interact with the vRA REST API will follow the same basic formula, though with some more logic and capability baked in.
Anyway, here's my first swing:
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraTester action
Inputs: none
@@ -513,7 +520,8 @@ Pretty simple, right? Let's see if it works:
![vraTester action](vraTester_action.png)
It did! Though that result is a bit hard to parse visually, so I'm going to prettify it a bit:
```json
// torchlight! {"lineNumbers": true}
[
  {
    "tags": [],
@@ -530,7 +538,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
    "folder": "vRA_Deploy",
    "externalRegionId": "Datacenter:datacenter-1001",
    "cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
    "name": "NUC", // [tl! focus]
    "id": "3d4f048a-385d-4759-8c04-117a170d060c",
    "updatedAt": "2022-06-02",
    "organizationId": "61ebe5bf-5f55-4dee-8533-7ad05c067dd9",
@@ -548,7 +556,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
        "href": "/iaas/api/zones/3d4f048a-385d-4759-8c04-117a170d060c"
      },
      "region": {
        "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" // [tl! focus]
      },
      "cloud-account": {
        "href": "/iaas/api/cloud-accounts/75d29635-f128-4b85-8cf9-95a9e5981c68"
@@ -569,7 +577,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
    },
    "externalRegionId": "Datacenter:datacenter-39056",
    "cloudAccountId": "75d29635-f128-4b85-8cf9-95a9e5981c68",
    "name": "QTZ", // [tl! focus]
    "id": "84470591-74a2-4659-87fd-e5d174a679a2",
    "updatedAt": "2022-06-02",
    "organizationId": "61ebe5bf-5f55-4dee-8533-7ad05c067dd9",
@@ -587,7 +595,7 @@ It did! Though that result is a bit hard to parse visually, so I'm going to pret
        "href": "/iaas/api/zones/84470591-74a2-4659-87fd-e5d174a679a2"
      },
      "region": {
        "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" // [tl! focus]
      },
      "cloud-account": {
        "href": "/iaas/api/cloud-accounts/75d29635-f128-4b85-8cf9-95a9e5981c68"
@@ -609,7 +617,8 @@ This action will basically just repeat the call that I tested above in `vraTeste
![vraGetZones action](vraGetZones_action.png)
```javascript
// torchlight! {"lineNumbers": true}
/*
JavaScript: vraGetZones action
Inputs: none
@@ -639,7 +648,8 @@ Oh, and the whole thing is wrapped in a conditional so that the code only execut
|:--- |:--- |:--- |
| `zoneName` | `string` | The name of the Zone selected in the request form |
```javascript
// torchlight! {"lineNumbers": true}
/* JavaScript: vraGetImages action
Inputs: zoneName (string)
Return type: array/string
@@ -708,7 +718,8 @@ Next I'll repeat the same steps to create a new `image` input. This time, though
![Binding the input](image_input.png)
The full code for my template now looks like this:
```yaml
# torchlight! {"lineNumbers": true}
formatVersion: 1
inputs:
  zoneName:
@@ -50,21 +50,21 @@ I've described the [process of creating a new instance on OCI in a past post](/f
### Prepare the server
Once the server's up and running, I go through the usual steps of applying any available updates:
```shell
sudo apt update # [tl! .cmd:1]
sudo apt upgrade
```
#### Install Tailscale
And then I'll install Tailscale using their handy-dandy bootstrap script:
```shell
curl -fsSL https://tailscale.com/install.sh | sh # [tl! .cmd]
```
When I bring up the Tailscale interface, I'll use the `--advertise-tags` flag to identify the server with an [ACL tag](https://tailscale.com/kb/1068/acl-tags/). ([Within my tailnet](/secure-networking-made-simple-with-tailscale/#acls)[^tailnet], all of my other clients are able to connect to devices bearing the `cloud` tag but `cloud` servers can only reach back to other devices for performing DNS lookups.)
```shell
sudo tailscale up --advertise-tags "tag:cloud" # [tl! .cmd]
```
[^tailnet]: [Tailscale's term](https://tailscale.com/kb/1136/tailnet/) for the private network which securely links Tailscale-connected devices.
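For reference, the tag and its access rules live in the tailnet policy file. A minimal sketch of what such a policy could look like (the owner group, DNS server address, and exact rules here are illustrative, not my actual policy):

```json
{
  "tagOwners": {
    "tag:cloud": ["autogroup:admin"]
  },
  "acls": [
    // members can reach anything on a tagged cloud server...
    {"action": "accept", "src": ["autogroup:member"], "dst": ["tag:cloud:*"]},
    // ...while cloud servers can only reach back to a DNS server (hypothetical address)
    {"action": "accept", "src": ["tag:cloud"], "dst": ["192.168.1.5:53"]}
  ]
}
```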
@@ -72,26 +72,22 @@ sudo tailscale up --advertise-tags "tag:cloud"
#### Install Docker
Next I install Docker and `docker-compose`:
```shell
sudo apt install ca-certificates curl gnupg lsb-release # [tl! .cmd:2]
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update # [tl! .cmd:1]
sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-compose-plugin
```
#### Configure firewall
This server automatically had an iptables firewall rule configured to permit SSH access. For Gitea, I'll also need to configure HTTP/HTTPS access. [As before](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration), I need to be mindful of the explicit `REJECT all` rule at the bottom of the `INPUT` chain:
```shell
sudo iptables -L INPUT --line-numbers # [tl! .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:8]
num  target     prot opt source      destination
1    ts-input   all  --  anywhere    anywhere
2    ACCEPT     all  --  anywhere    anywhere    state RELATED,ESTABLISHED
@@ -103,32 +99,31 @@ num target prot opt source destination
```
So I'll insert the new rules at line 6:
```shell
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT # [tl! .cmd:1]
sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT
```
And confirm that it did what I wanted it to (since both rules were inserted at position 6, the second insert bumped the first one down a slot, which is why `https` lands at line 6 and `http` at line 7):
```shell
sudo iptables -L INPUT --line-numbers # [tl! focus .cmd]
Chain INPUT (policy ACCEPT) # [tl! .nocopy:10]
num  target     prot opt source      destination
1    ts-input   all  --  anywhere    anywhere
2    ACCEPT     all  --  anywhere    anywhere    state RELATED,ESTABLISHED
3    ACCEPT     icmp --  anywhere    anywhere
4    ACCEPT     all  --  anywhere    anywhere
5    ACCEPT     udp  --  anywhere    anywhere    udp spt:ntp
6    ACCEPT     tcp  --  anywhere    anywhere    state NEW tcp dpt:https # [tl! focus:1]
7    ACCEPT     tcp  --  anywhere    anywhere    state NEW tcp dpt:http
8    ACCEPT     tcp  --  anywhere    anywhere    state NEW tcp dpt:ssh
9    REJECT     all  --  anywhere    anywhere    reject-with icmp-host-prohibited
```
That looks good, so let's save the new rules:
```shell
sudo netfilter-persistent save # [tl! .cmd]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save # [tl! .nocopy:1]
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
@@ -143,19 +138,19 @@ I'm now ready to move on with installing Gitea itself.
I'll start with creating a `git` user. This account will be set as the owner of the data volume used by the Gitea container, but will also (perhaps more importantly) facilitate [SSH passthrough](https://docs.gitea.io/en-us/install-with-docker/#ssh-container-passthrough) into the container for secure git operations.
Here's where I create the account and also generate what will become the SSH key used by the git server:
```shell
sudo useradd -s /bin/bash -m git # [tl! .cmd:1]
sudo -u git ssh-keygen -t ecdsa -C "Gitea Host Key"
```
The `git` user's SSH public key gets added as-is directly to that user's `authorized_keys` file:
```shell
sudo -u git cat /home/git/.ssh/id_ecdsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys # [tl! .cmd:1]
sudo -u git chmod 600 /home/git/.ssh/authorized_keys
```
When other users add their SSH public keys into Gitea's web UI, those will get added to `authorized_keys` with a little something extra: an alternate command to perform git actions instead of just SSH ones:
```text
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty <user pubkey>
```
@@ -164,14 +159,13 @@ No users have added their keys to Gitea just yet so if you look at `/home/git/.s
{{% /notice %}}
So I'll go ahead and create that extra command:
```shell
cat <<"EOF" | sudo tee /usr/local/bin/gitea # [tl! .cmd]
#!/bin/sh
ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@"
EOF
sudo chmod +x /usr/local/bin/gitea # [tl! .cmd]
```
So when I use a `git` command to interact with the server via SSH, the commands will get relayed into the Docker container on port 2222.
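That relay hinges on `sshd` exporting the client's originally requested command as `SSH_ORIGINAL_COMMAND` before running the forced command from `authorized_keys`. Here's a tiny simulation of that handoff, with plain `sh` standing in for `sshd` and a made-up repo name, so it can run anywhere:

```shell
# sshd stores the client's requested command in SSH_ORIGINAL_COMMAND and
# runs the forced command instead; the wrapper then forwards both pieces
# into the container over port 2222. Here 'sh -c' plays the role of sshd.
seen=$(SSH_ORIGINAL_COMMAND='git-upload-pack my-repo.git' \
  sh -c 'echo "wrapper sees: $SSH_ORIGINAL_COMMAND"')
echo "$seen" # wrapper sees: git-upload-pack my-repo.git
```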
@@ -180,26 +174,27 @@ So when I use a `git` command to interact with the server via SSH, the commands
That takes care of most of the prep work, so now I'm ready to create the `docker-compose.yaml` file which will tell Docker how to host Gitea.
I'm going to place this in `/opt/gitea`:
```shell
sudo mkdir -p /opt/gitea # [tl! .cmd:1]
cd /opt/gitea
```
And I want to be sure that my new `git` user owns the `./data` directory which will be where the git contents get stored:
```shell
sudo mkdir data # [tl! .cmd:1]
sudo chown git:git -R data
```
Now to create the file:
```shell
sudo vi docker-compose.yaml # [tl! .cmd]
```
The basic contents of the file came from the [Gitea documentation for Installation with Docker](https://docs.gitea.io/en-us/install-with-docker/), but I also included some (highlighted) additional environment variables based on the [Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/):
`docker-compose.yaml`:
```yaml {linenos=true,hl_lines=["12-13","19-31",38,43]}
# torchlight! {"lineNumbers": true}
version: "3"
networks:
@@ -211,14 +206,14 @@ services:
    image: gitea/gitea:latest
    container_name: gitea
    environment:
      - USER_UID=1003 # [tl! highlight:1]
      - USER_GID=1003
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
      - GITEA____APP_NAME=Gitea # [tl! highlight:start]
      - GITEA__log__MODE=file
      - GITEA__openid__ENABLE_OPENID_SIGNIN=false
      - GITEA__other__SHOW_FOOTER_VERSION=false
@@ -230,19 +225,19 @@ services:
      - GITEA__server__LANDING_PAGE=explore
      - GITEA__service__DISABLE_REGISTRATION=true
      - GITEA__service_0X2E_explore__DISABLE_USERS_PAGE=true
      - GITEA__ui__DEFAULT_THEME=arc-green # [tl! highlight:end]
    restart: always
    networks:
      - gitea
    volumes:
      - ./data:/data
      - /home/git/.ssh/:/data/git/.ssh # [tl! highlight]
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "127.0.0.1:2222:22" # [tl! highlight]
    depends_on:
      - db
@@ -285,21 +280,22 @@ Let's go through the extra configs in a bit more detail:
Beyond the environment variables, I also defined a few additional options to allow the SSH passthrough to function. Mounting the `git` user's SSH config directory into the container will ensure that user keys defined in Gitea will also be reflected outside of the container, and setting the container to listen on local port `2222` will allow it to receive the forwarded SSH connections:
```yaml
    volumes: # [tl! focus]
      - ./data:/data
      - /home/git/.ssh/:/data/git/.ssh # [tl! focus]
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports: # [tl! focus]
      - "3000:3000"
      - "127.0.0.1:2222:22" # [tl! focus]
```
With the config in place, I'm ready to fire it up:
#### Start containers
Starting Gitea is as simple as
```shell
sudo docker-compose up -d # [tl! .cmd]
```
which will spawn both the Gitea server as well as a `postgres` database to back it.
@@ -311,8 +307,8 @@ I've [written before](/federated-matrix-server-synapse-on-oracle-clouds-free-tie
#### Install Caddy
So exactly how simple does Caddy make this? Well let's start with installing Caddy on the system:
```shell
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https # [tl! .cmd:4]
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
@@ -321,14 +317,14 @@ sudo apt install caddy
#### Configure Caddy
Configuring Caddy is as simple as creating a Caddyfile:
```shell
sudo vi /etc/caddy/Caddyfile # [tl! .cmd]
```
Within that file, I tell it which fully-qualified domain name(s) I'd like it to respond to (and manage SSL certificates for), as well as that I'd like it to function as a reverse proxy and send the incoming traffic to the same port `3000` that is used by the Docker container:
```text
git.bowdre.net {
    reverse_proxy localhost:3000
}
```
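If I ever want a little more out of it, the same site block can grow without much ceremony. A hedged sketch with a couple of optional directives (the compression choice and log path are illustrative, not part of my actual config):

```text
git.bowdre.net {
    reverse_proxy localhost:3000
    encode gzip
    log {
        output file /var/log/caddy/git_access.log
    }
}
```

After editing, `sudo systemctl reload caddy` picks up the change without dropping connections.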
@@ -336,8 +332,8 @@ That's it. I don't need to worry about headers or ACME configurations or anythin
#### Start Caddy
All that's left at this point is to start up Caddy:
```shell
sudo systemctl enable caddy # [tl! .cmd:2]
sudo systemctl start caddy
sudo systemctl restart caddy
```
@@ -363,14 +359,14 @@ And then I can log out and log back in with my new non-admin identity!
#### Add SSH public key
Associating a public key with my new Gitea account will allow me to easily authenticate my pushes from the command line. I can create a new SSH public/private keypair by following [GitHub's instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent):
```shell
ssh-keygen -t ed25519 -C "user@example.com" # [tl! .cmd]
```
I'll view the contents of the public key - and go ahead and copy the output for future use:
```shell
cat ~/.ssh/id_ed25519.pub # [tl! .cmd]
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com # [tl! .nocopy]
```
Back in the Gitea UI, I'll click the user menu up top and select **Settings**, then the *SSH / GPG Keys* tab, and click the **Add Key** button:
@@ -381,9 +377,9 @@ Back in the Gitea UI, I'll click the user menu up top and select **Settings**, t
I can give the key a name and then paste in that public key, and then click the lower **Add Key** button to insert the new key.
To verify that the SSH passthrough magic I [configured earlier](#prepare-git-user) is working, I can take a look at `git`'s `authorized_keys` file:
```shell
sudo tail -2 /home/git/.ssh/authorized_keys # [tl! .cmd]
# gitea public key [tl! .nocopy:1]
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,no-user-rc,restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com
```
@@ -395,8 +391,8 @@ I'm already limiting this server's exposure by blocking inbound SSH (except for
[Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) can help with that by monitoring log files for repeated authentication failures and then creating firewall rules to block the offender.
Installing Fail2ban is simple:
```shell
sudo apt update # [tl! .cmd:1]
sudo apt install fail2ban
```
@@ -411,10 +407,11 @@ Specifically, I'll want to watch `log/gitea.log` for messages like the following
```
So let's create that filter:
```shell
sudo vi /etc/fail2ban/filter.d/gitea.conf # [tl! .cmd]
```
```ini
# torchlight! {"lineNumbers": true}
# /etc/fail2ban/filter.d/gitea.conf
[Definition]
failregex = .*(Failed authentication attempt|invalid credentials).* from <HOST>
@@ -422,10 +419,11 @@ ignoreregex =
```
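Before leaning on that filter, it's worth sanity-checking the pattern itself. Here's a rough stand-in test using `grep -E` instead of Fail2ban's regex engine (the `<HOST>` token is Fail2ban-specific so it's dropped, and the log lines are made-up samples shaped like Gitea's):

```shell
# Two of the three sample lines should trip the filter's pattern.
pattern='(Failed authentication attempt|invalid credentials).* from'
matches=$(printf '%s\n' \
  'Failed authentication attempt for admin from 203.0.113.5:51772' \
  'invalid credentials from 203.0.113.5' \
  'Completed GET /user/login 200 OK' \
  | grep -Ec "$pattern")
echo "$matches" # 2
```

For the real thing, `fail2ban-regex /path/to/gitea.log /etc/fail2ban/filter.d/gitea.conf` runs the same kind of check with Fail2ban's own engine.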
Next I create the jail, which tells Fail2ban what to do:
```shell
sudo vi /etc/fail2ban/jail.d/gitea.conf # [tl! .cmd]
```
```ini
# torchlight! {"lineNumbers": true}
# /etc/fail2ban/jail.d/gitea.conf
[gitea]
enabled = true
@@ -440,15 +438,15 @@ action = iptables-allports
This configures Fail2ban to watch the log file (`logpath`) inside the data volume mounted to the Gitea container for messages which match the pattern I just configured (`gitea`). If a system fails to log in 5 times (`maxretry`) within 1 hour (`findtime`, in seconds) then the offending IP will be banned for 1 day (`bantime`, in seconds).
Then I just need to enable and start Fail2ban:
```shell
sudo systemctl enable fail2ban # [tl! .cmd:1]
sudo systemctl start fail2ban
```
To verify that it's working, I can deliberately fail to log in to the web interface and watch `/var/log/fail2ban.log`:
```shell
sudo tail -f /var/log/fail2ban.log # [tl! .cmd]
2022-07-17 21:52:26,978 fail2ban.filter [36042]: INFO [gitea] Found ${MY_HOME_IP}| - 2022-07-17 21:52:26 # [tl! .nocopy]
```
Excellent, let's now move on to creating some content.
@@ -480,8 +478,8 @@ Once it's created, the new-but-empty repository gives me instructions on how I c
![Empty repository](empty_repo.png)
Now I can follow the instructions to initialize my local Obsidian vault (stored at `~/obsidian-vault/`) as a git repository and perform my initial push to Gitea:
```shell
cd ~/obsidian-vault/ # [tl! .cmd:5]
git init
git add .
git commit -m "initial commit"
@@ -23,13 +23,14 @@ If you'd just like to import a working phpIPAM integration into your environment
Before even worrying about the SDK, I needed to [get a phpIPAM instance ready](https://phpipam.net/documents/installation/). I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my "Home" network (`192.168.1.0/24`). I installed Ubuntu 20.04.1 LTS, and then used [this guide](https://computingforgeeks.com/install-and-configure-phpipam-on-ubuntu-debian-linux/) to install phpIPAM.
Once phpIPAM was running and accessible via the web interface, I then used `openssl` to generate a self-signed certificate to be used for the SSL API connection:
```shell
sudo mkdir /etc/apache2/certificate # [tl! .cmd:2]
cd /etc/apache2/certificate/
sudo openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out apache-certificate.crt -keyout apache.key
```
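It can be worth double-checking what actually landed in a certificate generated that way. This sketch builds a throwaway key/cert pair in a temp directory (non-interactive thanks to `-subj`, so it's safe to run anywhere) and reads the subject back out:

```shell
# Generate a disposable self-signed certificate and inspect its subject.
tmp=$(mktemp -d)
openssl req -new -newkey rsa:2048 -x509 -sha256 -days 365 -nodes \
  -subj '/CN=ipam.lab.bowdre.net' \
  -out "$tmp/apache-certificate.crt" -keyout "$tmp/apache.key" 2>/dev/null
openssl x509 -noout -subject -in "$tmp/apache-certificate.crt"
```

`-dates` and `-fingerprint` are handy companions to `-subject` when verifying what apache will end up serving.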
I edited the apache config file to bind that new certificate on port 443, and to redirect requests on port 80 to port 443:
```text
# torchlight! {"lineNumbers": true}
<VirtualHost *:80>
    ServerName ipam.lab.bowdre.net
    Redirect permanent / https://ipam.lab.bowdre.net
@@ -54,7 +55,8 @@ After restarting apache, I verified that hitting `http://ipam.lab.bowdre.net` re
Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`.
This is Ubuntu, so I edited `/etc/netplan/99-netcfg-vmware.yaml` to add the `routes` section at the bottom:
```yaml
# torchlight! {"lineNumbers": true}
# /etc/netplan/99-netcfg-vmware.yaml
network:
  version: 2
@@ -71,24 +73,23 @@ network:
          - lab.bowdre.net
        addresses:
          - 192.168.1.5
      routes: # [tl! focus:3]
        - to: 172.16.0.0/16
          via: 192.168.1.100
          metric: 100
```
I then ran `sudo netplan apply` so the change would take immediate effect and confirmed the route was working by pinging the vCenter's interface on the `172.16.10.0/24` network:
```shell
sudo netplan apply # [tl! .cmd]
```
```shell
ip route # [tl! .cmd]
default via 192.168.1.1 dev ens160 proto static # [tl! .nocopy:3]
172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100
192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14
ping 172.16.10.12 # [tl! .cmd]
PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data. # [tl! .nocopy:7]
64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms
64 bytes from 172.16.10.12: icmp_seq=2 ttl=64 time=0.256 ms
64 bytes from 172.16.10.12: icmp_seq=3 ttl=64 time=0.241 ms
@@ -99,7 +100,7 @@ rtt min/avg/max/mdev = 0.241/0.259/0.282/0.016 ms
```
Now would also be a good time to go ahead and enable cron jobs so that phpIPAM will automatically scan its defined subnets for changes in IP availability and device status. phpIPAM includes a pair of scripts in `INSTALL_DIR/functions/scripts/`: one for discovering new hosts, and the other for checking the status of previously discovered hosts. So I ran `sudo crontab -e` to edit root's crontab and pasted in these two lines to call both scripts every 15 minutes:
```text
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/discoveryCheck.php
*/15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/pingCheck.php
```
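For anyone rusty on cron syntax, those five leading fields break down like this (generic cron reference, nothing phpIPAM-specific; `<command>` is a placeholder):

```text
# ┌──────────── minute (*/15 = every 15th minute)
# │    ┌─────── hour (* = every hour)
# │    │ ┌───── day of month
# │    │ │ ┌─── month
# │    │ │ │ ┌─ day of week
*/15   * * * *  <command>
```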
@ -205,9 +206,10 @@ Now that I know how to talk to phpIPAM via its RESP API, it's time to figure out
I downloaded the SDK from [here](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk). It's got a pretty good [README](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/README_VMware.md) which describes the requirements (Java 8+, Maven 3, Python3, Docker, internet access) as well as how to build the package. I also consulted [this white paper](https://docs.vmware.com/en/vRealize-Automation/8.2/ipam_integration_contract_reqs.pdf) which describes the inputs provided by vRA and the outputs expected from the IPAM integration.
The README tells you to extract the .zip and make a simple modification to the `pom.xml` file to "brand" the integration:
```xml
# torchlight! {"lineNumbers": true}
<properties>
    <provider.name>phpIPAM</provider.name> <!-- [tl! focus:2] -->
    <provider.description>phpIPAM integration for vRA</provider.description>
    <provider.version>1.0.3</provider.version>
```
You can then kick off the build with `mvn package -PcollectDependencies -Duser.id=${UID}`, which will (eventually) spit out `./target/phpIPAM.zip`. You can then [import the package to vRA](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) and test it against the `httpbin.org` hostname to validate that the build process works correctly.
You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing `./src/main/resources/endpoint-schema.json`. I added an `apiAppId` field:
```json
// torchlight! {"lineNumbers":true}
{
  "layout":{
    "pages":[
// ...
"id":"section_1", "id":"section_1",
"fields":[ "fields":[
{ {
"id":"apiAppId", "id":"apiAppId", // [tl! focus]
"display":"textField" "display":"textField"
}, },
{ {
// ...
"type":{ "type":{
"dataType":"string" "dataType":"string"
}, },
"label":"API App ID", "label":"API App ID", // [tl! focus]
"constraints":{ "constraints":{
"required":true "required":true
} }
```
The `do_validate_endpoint` function has a handy comment letting us know that's where we'll drop in our code:
```python
# torchlight! {"lineNumbers": true}
def do_validate_endpoint(self, auth_credentials, cert):
    # Your implemention goes here
    # ...
    response = requests.get("https://" + self.inputs["endpointProperties"]["hostName"], verify=cert, auth=(username, password))
```
The example code gives us a nice start at how we'll get our inputs from vRA. So let's expand that a bit:
```python
# torchlight! {"lineNumbers": true}
def do_validate_endpoint(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    # ...
    apiAppId = self.inputs["endpointProperties"]["apiAppId"]
```
As before, we'll construct the "base" URI by inserting the `hostname` and `apiAppId`, and we'll combine the `username` and `password` into our `auth` variable:
```python
# torchlight! {"lineNumbers": true}
uri = f'https://{hostname}/api/{apiAppId}/'
auth = (username, password)
```
I realized that I'd be needing to do the same authentication steps for each one of these operations, so I created a new `auth_session()` function to do the heavy lifting. Other operations will also need to return the authorization token but for this run we really just need to know whether the authentication was successful, which we can do by checking `req.status_code`.
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    return req
```
And we'll call that function from `do_validate_endpoint()`:
```python
# torchlight! {"lineNumbers": true}
    # Test auth connection
    try:
        response = auth_session(uri, auth, cert)
```
Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
![Extensibility action runs](e4PTJxfqH.png)
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
```json
// torchlight! {"lineNumbers": true}
{
  "__metadata": {
    "headers": {
```

That's one operation in the bank!
### Step 6: 'Get IP Ranges' action
So vRA can authenticate against phpIPAM; next, let's actually query to get a list of available IP ranges. This happens in `./src/main/python/get_ip_ranges/source.py`. We'll start by pulling over our `auth_session()` function and flesh it out a bit more to return the authorization token:
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    # ...
    return token
```
We'll then modify `do_get_ip_ranges()` with our needed variables, and then call `auth_session()` to get the necessary token:
```python
# torchlight! {"lineNumbers": true}
def do_get_ip_ranges(self, auth_credentials, cert):
    # Build variables
    username = auth_credentials["privateKeyId"]
    # ...
    token = auth_session(uri, auth, cert)
```
We can then query for the list of subnets, just like we did earlier:
```python
# torchlight! {"lineNumbers": true}
# Request list of subnets
subnet_uri = f'{uri}/subnets/'
ipRanges = []
```
{{% notice note "Update" %}}
I now filter for networks identified by the designated custom field like so:
```python
# torchlight! {"lineNumbers": true}
# Request list of subnets
subnet_uri = f'{uri}/subnets/'
if enableFilter == "true":
```
{{% /notice %}}
Now is a good time to consult [that white paper](https://docs.vmware.com/en/VMware-Cloud-services/1.0/ipam_integration_contract_reqs.pdf) to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return `ipRanges` which is a list of `IpRange` objects. `IpRange` requires `id`, `name`, `startIPAddress`, `endIPAddress`, `ipVersion`, and `subnetPrefixLength` properties. It can also accept `description`, `gatewayAddress`, and `dnsServerAddresses` properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly.
For instance, these are pretty direct matches:
```python
# torchlight! {"lineNumbers": true}
ipRange['id'] = str(subnet['id'])
ipRange['description'] = str(subnet['description'])
ipRange['subnetPrefixLength'] = str(subnet['mask'])
ipRange['name'] = f"{str(subnet['subnet'])}/{str(subnet['mask'])}"
```
Working with IP addresses in Python can be greatly simplified by use of the `ipaddress` module, so I added an `import ipaddress` statement near the top of the file. I also added it to `requirements.txt` to make sure it gets picked up by the Maven build. I can then use that to figure out the IP version as well as computing reasonable start and end IP addresses:
```python
# torchlight! {"lineNumbers": true}
network = ipaddress.ip_network(str(subnet['subnet']) + '/' + str(subnet['mask']))
ipRange['ipVersion'] = 'IPv' + str(network.version)
ipRange['startIPAddress'] = str(network[1])
ipRange['endIPAddress'] = str(network[-2])
```
I'd like to try to get the DNS servers from phpIPAM if they're defined, but I also don't want the whole thing to puke if a subnet doesn't have that defined. phpIPAM returns the DNS servers as a semicolon-delimited string; I need them to look like a Python list:
```python
# torchlight! {"lineNumbers": true}
try:
    ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')]
except:
    ipRange['dnsServerAddresses'] = []
```
I can also nest another API request to find which address is marked as the gateway for a given subnet:
```python
# torchlight! {"lineNumbers": true}
gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert)
if gw_req.status_code == 200:
    gateway = gw_req.json()['data'][0]['ip']
    ipRange['gatewayAddress'] = gateway
```
And then I merge each of these `ipRange` objects into the `ipRanges` list which will be returned to vRA:
```python
# torchlight! {"lineNumbers": true}
ipRanges.append(ipRange)
```
After rearranging a bit and tossing in some logging, here's what I've got:
```python
# torchlight! {"lineNumbers": true}
for subnet in subnets:
    ipRange = {}
    ipRange['id'] = str(subnet['id'])
```
In any case, it's time to once again use `mvn package -PcollectDependencies -Duser.id=${UID}` to fire off the build, and then import `phpIPAM.zip` into vRA.
vRA runs the `phpIPAM_GetIPRanges` action about every ten minutes so keep checking back on the **Extensibility > Action Runs** view until it shows up. You can then select the action and review the Log to see which IP ranges got picked up:
```
[2021-02-21 23:14:04,026] [INFO] - Querying for auth credentials
[2021-02-21 23:14:04,051] [INFO] - Credentials obtained successfully!
[2021-02-21 23:14:04,089] [INFO] - Found subnet: 172.16.10.0/24 - 1610-Management.
```

Next, we need to figure out how to allocate an IP.
### Step 7: 'Allocate IP' action
I think we've got a rhythm going now. So we'll dive into `./src/main/python/allocate_ip/source.py`, create our `auth_session()` function, and add our variables to the `do_allocate_ip()` function. I also created a new `bundle` object to hold the `uri`, `token`, and `cert` items so that I don't have to keep typing those over and over and over.
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    # ...
    }
```
I left the remainder of `do_allocate_ip()` intact but modified its calls to other functions so that my new `bundle` would be included:
```python
# torchlight! {"lineNumbers": true}
allocation_result = []
try:
    resource = self.inputs["resourceInfo"]
    # ...
except Exception as e:
    # ...
    raise e
```
I also added `bundle` to the `allocate()` function:
```python
# torchlight! {"lineNumbers": true}
def allocate(resource, allocation, context, endpoint, bundle):
    last_error = None
    # ...
    raise last_error
```
The heavy lifting is actually handled in `allocate_in_range()`. Right now, my implementation only supports doing a single allocation so I added an escape in case someone asks to do something crazy like allocate *2* IPs. I then set up my variables:
```python
# torchlight! {"lineNumbers": true}
def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle):
    if int(allocation['size']) == 1:
        vmName = resource['name']
```
That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate `import datetime` statement at the top of this file, and include `datetime` in `requirements.txt`.
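To illustrate the sort of timestamped reservation note that produces (the VM name, owner, and payload field names below are illustrative placeholders, not necessarily the action's exact payload):

```python
import datetime

vmName = 'web01'   # hypothetical VM name from vRA's resource info
owner = 'jbowdre'  # hypothetical deployment owner

# Illustrative payload: a hostname plus a description stamped with the
# current time, so the reservation's origin is obvious in phpIPAM
payload = {
    'hostname': vmName,
    'description': f'Reserved by vRA for {owner} at {datetime.datetime.now()}',
}
print(payload['description'])
```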
So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which `range_id` to use and it will return the first available IP.
```python
# torchlight! {"lineNumbers": true}
allocate_uri = f'{uri}/addresses/first_free/{str(range_id)}/'
allocate_req = requests.post(allocate_uri, data=payload, headers=token, verify=cert)
allocate_req = allocate_req.json()
```
Per the white paper, we'll need to return `ipAllocationId`, `ipAddresses`, `ipRangeId`, and `ipVersion` to vRA in an `AllocationResult`. Once again, I'll leverage the `ipaddress` module for figuring the version (and, once again, I'll add it as an import and to the `requirements.txt` file).
```python
# torchlight! {"lineNumbers": true}
if allocate_req['success']:
    version = ipaddress.ip_address(allocate_req['data']).version
    result = {
    # ...
return result
```
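As a standalone illustration of that version lookup and result shape (all of the values here are made up, and the `ipAllocationId`/`ipRangeId` values are hypothetical stand-ins for what vRA's request would supply):

```python
import ipaddress

# Pretend response from phpIPAM's first_free endpoint (shape assumed)
allocate_req = {'success': True, 'data': '172.16.40.3'}

version = ipaddress.ip_address(allocate_req['data']).version
result = {
    'ipAllocationId': 'alloc-1',            # hypothetical id echoed back to vRA
    'ipRangeId': '12',                      # hypothetical range id
    'ipVersion': 'IPv' + str(version),
    'ipAddresses': [allocate_req['data']],
}
print(result['ipVersion'])  # IPv4
```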
I also implemented a hasty `rollback()` in case something goes wrong and we need to undo the allocation:
```python
# torchlight! {"lineNumbers": true}
def rollback(allocation_result, bundle):
    uri = bundle['uri']
    token = bundle['token']
    # ...
    return
```
The full `allocate_ip` code is [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/allocate_ip/source.py). Once more, run `mvn package -PcollectDependencies -Duser.id=${UID}` and import the new `phpIPAM.zip` package into vRA. You can then open a Cloud Assembly Cloud Template associated with one of the specified networks and hit the "Test" button to see if it works. You should see a new `phpIPAM_AllocateIP` action run appear on the **Extensibility > Action runs** tab. Check the Log for something like this:
```
[2021-02-22 01:31:41,729] [INFO] - Querying for auth credentials
[2021-02-22 01:31:41,757] [INFO] - Credentials obtained successfully!
[2021-02-22 01:31:41,773] [INFO] - Allocating from range 12
```

Almost done!
### Step 8: 'Deallocate IP' action
The last step is to remove the IP allocation when a vRA deployment gets destroyed. It starts just like the `allocate_ip` action with our `auth_session()` function and variable initialization:
```python
# torchlight! {"lineNumbers": true}
def auth_session(uri, auth, cert):
    auth_uri = f'{uri}/user/'
    req = requests.post(auth_uri, auth=auth, verify=cert)
    # ...
    }
```
And the `deallocate()` function is basically a prettier version of the `rollback()` function from the `allocate_ip` action:
```python
# torchlight! {"lineNumbers": true}
def deallocate(resource, deallocation, bundle):
    uri = bundle['uri']
    token = bundle['token']
    # ...
    }
```
You can review the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/deallocate_ip/source.py). Build the package with Maven, import to vRA, and run another test deployment. The `phpIPAM_DeallocateIP` action should complete successfully. Something like this will be in the log:
```
[2021-02-22 01:36:29,438] [INFO] - Querying for auth credentials
[2021-02-22 01:36:29,461] [INFO] - Credentials obtained successfully!
[2021-02-22 01:36:29,476] [INFO] - Deallocating ip 172.16.40.3 from range 12
```
And the Outputs section of the Details tab will show:
```json
// torchlight! {"lineNumbers": true}
{
  "ipDeallocations": [
    {