Mirror of https://github.com/jbowdre/runtimeterror.git, synced 2024-12-22 19:02:18 +00:00
convert 'info' notices to 'note' notices because they're less ugly
This commit is contained in:
parent aa82067d31
commit 039cc151d8
14 changed files with 132 additions and 132 deletions
@ -21,19 +21,19 @@ $Partition = Get-Volume -DriveLetter C | Get-Partition
$Partition | Resize-Partition -Size ($Partition | Get-PartitionSupportedSize).sizeMax
```

It was a bit trickier for Linux systems though. My Linux templates all use LVM to abstract the file systems away from the physical disks, but they may have a different number of physical partitions or different names for the volume groups and logical volumes. So I needed to be able to automagically determine which logical volume was mounted as `/`, which volume group it was a member of, and which partition on which disk is used for that physical volume. I could then expand the physical partition to fill the disk, expand the volume group to fill the now-larger physical volume, grow the logical volume to fill the volume group, and (finally) extend the file system to fill the logical volume.
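
Boiled down, the manual version of that chain looks something like the following - a rough sketch only, since the device, volume group, and logical volume names here are examples and the script below figures them out on its own:

```shell
# Illustrative only: /dev/sda2 backing the "centos" VG is an assumed example, not a detected value
parted /dev/sda --script resizepart 2 100%               # grow the partition to the end of the disk
partprobe /dev/sda                                       # make the kernel re-read the partition table
pvresize /dev/sda2                                       # grow the LVM physical volume
lvextend --extents +100%FREE --resize /dev/centos/root   # grow the LV and its filesystem together
```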

I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh) that helped with most of those operations, but it required the user to specify the physical and logical volumes. I modified it to auto-detect those, and here's what I came up with:

-{{% notice info "MBR only" %}}
+{{% notice note "MBR only" %}}
When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu.
{{% /notice %}}

```shell
#!/bin/bash
# This will attempt to automatically detect the LVM logical volume where / is mounted and then
# expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical
# volume, and Linux filesystem to consume new free space on the disk.
# Adapted from https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh

extenddisk() {
@ -53,7 +53,7 @@ extenddisk() {
parted $disk --script unit s print
partx -v -a $disk
pvresize $pvname
lvextend --extents +100%FREE --resize $lvpath
echo -e "\n+++New root partition size:+++"
df -h / | grep -v Filesystem
}
@ -70,7 +70,7 @@ lvname=$(lvs --noheadings $mountpoint | awk '{print($1)}') # root
vgname=$(lvs --noheadings $mountpoint | awk '{print($2)}') # centos
lvpath="/dev/${vgname}/${lvname}" # /dev/centos/root
pvname=$(pvs | grep $vgname | tail -n1 | awk '{print($1)}') # /dev/sda2
disk=$(echo $pvname | rev | cut -c 2- | rev) # /dev/sda
diskshort=$(echo $disk | grep -Po '[^\/]+$') # sda
partnum=$(echo $pvname | grep -Po '\d$') # 2
startsector=$(fdisk -u -l $disk | grep $pvname | awk '{print $2}') # 2099200
@ -21,7 +21,7 @@ I wanted to try out the self-hosted setup, and I discovered that the [official d

I then came across [this comment](https://www.reddit.com/r/Bitwarden/comments/8vmwwe/best_place_to_self_host_bitwarden/e1p2f71/) on Reddit which discussed in somewhat-vague terms the steps required to get BitWarden to run on the [free](https://cloud.google.com/free/docs/always-free-usage-limits#compute_name) `e2-micro` instance, and also introduced me to the community-built [vaultwarden](https://github.com/dani-garcia/vaultwarden) project which is specifically designed to run a BW-compatible server on resource-constrained hardware. So here are the steps I wound up taking to get this up and running.

-{{% notice info "bitwarden_rs -> vaultwarden"%}}
+{{% notice note "bitwarden_rs -> vaultwarden"%}}
When I originally wrote this post back in September 2018, the containerized BitWarden solution was called `bitwarden_rs`. The project [has since been renamed](https://github.com/dani-garcia/vaultwarden/discussions/1642) to `vaultwarden`, and I've since moved to the hosted version of BitWarden. I have attempted to update this article to account for the change but have not personally tested this lately. Good luck, dear reader!
{{% /notice %}}
@ -57,7 +57,7 @@ $ sudo vi /etc/ddclient.conf
4. `sudo vi /etc/default/ddclient` and make sure that `run_daemon="true"`:

```shell
# Configuration for ddclient scripts
# generated from debconf on Sat Sep 8 21:58:02 UTC 2018
#
# /etc/default/ddclient
@ -66,7 +66,7 @@ $ sudo vi /etc/ddclient.conf
# from package isc-dhcp-client) updates the systems IP address.
run_dhclient="false"

# Set to "true" if ddclient should be run every time a new ppp connection is
# established. This might be useful, if you are using dial-on-demand.
run_ipup="false"
@ -196,7 +196,7 @@ vagrant destroy
### Create a heavy VM, as a treat
Having proven to myself that Vagrant does work on a Chromebook, let's see how it does with a slightly-heavier VM.... like [Windows 11](https://app.vagrantup.com/oopsme/boxes/windows11-22h2).

-{{% notice info "Space Requirement" %}}
+{{% notice note "Space Requirement" %}}
Windows 11 makes for a pretty hefty VM which will require significant storage space. My Chromebook's Linux environment ran out of storage space the first time I attempted to deploy this guy. Fortunately ChromeOS makes it easy to allocate more space to Linux (**Settings > Advanced > Developers > Linux development environment > Disk size**). You'll probably need at least 30GB free to provision this VM.
{{% /notice %}}
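
The actual Vagrant commands come later in the post; as a minimal sketch (box name taken from the link above, provider flags omitted), spinning it up looks like:

```shell
# Generate a Vagrantfile pointed at the Windows 11 box and boot it
vagrant init oopsme/windows11-22h2
vagrant up
```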
@ -25,7 +25,7 @@ tags:
- vpn
comment: true # Disable comment if false.
---
-{{% notice info "ESXi-ARM Fling v1.10 Update" %}}
+{{% notice note "ESXi-ARM Fling v1.10 Update" %}}
On July 20, 2022, VMware released a [major update](https://blogs.vmware.com/arm/2022/07/20/1-10/) for the ESXi-ARM Fling. Among [other fixes and improvements](https://flings.vmware.com/esxi-arm-edition#changelog), this version enables **in-place ESXi upgrades** and [adds support for the Quartz64's **on-board NIC**](https://twitter.com/jmcwhatever/status/1549935971822706688). To update, I:
1. Wrote the new ISO installer to another USB drive.
2. Attached the installer drive to the USB hub, next to the existing ESXi drive.
@ -225,7 +225,7 @@ The rest of the OVF deployment is basically just selecting the default options a
#### Configuring Photon
There are just a few things I'll want to configure on this VM before I move on to installing Tailscale, and I'll start out simply by logging in with the remote console.

-{{% notice info "Default credentials" %}}
+{{% notice note "Default credentials" %}}
The default password for Photon's `root` user is `changeme`. You'll be forced to change that at first login.
{{% /notice %}}
@ -172,7 +172,7 @@ As you can see, Swagger can really help to jump-start the exploration of a new A
[^vracloud]: The online version is really intended for the vRealize Automation Cloud hosted solution. It can be a useful reference but some APIs are missing.
[^password]: This request form is pure plaintext so you'd never have known that my password is actually `********` if I hadn't mentioned it. Whoops!
#### HTTPie
[HTTPie](https://httpie.io/) is a handy command-line utility optimized for interacting with web APIs. This will make things easier as I dig deeper.

Installing the [Debian package](https://httpie.io/docs/cli/debian-and-ubuntu) is a piece of ~~cake~~ _pie_[^pie]:
```shell
@ -224,7 +224,7 @@ So now if I want to find out which images have been configured in vRA, I can ask
```shell
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token"
```
-{{% notice info "Request Items" %}}
+{{% notice note "Request Items" %}}
Remember from above that HTTPie will automatically insert key/value pairs separated by a colon into the request header.
{{% /notice %}}
@ -305,7 +305,7 @@ And I'll get back some headers followed by a JSON object detailing the defined
"totalElements": 2
}
```
This doesn't give me the *name* of the regions, but I could use the `_links.region.href` data to quickly match up images which exist in a given region.[^foreshadowing]

You'll notice that HTTPie also prettifies the JSON response to make it easy for humans to parse. This is great for experimenting with requests against different API endpoints and getting a feel for what data can be found where. And firing off tests in HTTPie can be a lot quicker (and easier to format) than with other tools.
@ -316,7 +316,7 @@ Now let's take what we've learned and see about implementing it as vRO actions.
### vRealize Orchestrator actions
My immediate goal for this exercise is to create a set of vRealize Orchestrator actions which take in a zone/location identifier from the Cloud Assembly request and return a list of images which are available for deployment there. I'll start with some utility actions to do the heavy lifting, and then I'll be able to call them from other actions as things get more complicated/interesting. Before I can do that, though, I'll need to add the vRA instance as an HTTP REST endpoint in vRO.

-{{% notice info "This post brought to you by..." %}}
+{{% notice note "This post brought to you by..." %}}
A lot of what follows was borrowed *heavily* from a [very helpful post by Oktawiusz Poranski over at Automate Clouds](https://automateclouds.com/2021/vrealize-automation-8-rest-api-how-to/) so be sure to check out that site for more great tips on working with APIs!
{{% /notice %}}
@ -342,7 +342,7 @@ I'm going to call this new Configuration `Endpoints` since I plan to use it for

![Creating the new Configuration](config_element_2.png)

I'll then click over to the **Variables** tab and create a new variable to store my vRA endpoint details; I'll call it `vRAHost`, and hit the *Type* dropdown and select **New Composite Type**.

![Creating the new variable](vrahost_variable_1.png)
@ -377,7 +377,7 @@ I'll head into **Library > Actions** to create a new action inside my `com.virtu
| `variableName` | `string` | Name of desired variable inside Configuration |

```javascript
/*
JavaScript: getConfigValue action
Inputs: path (string), configurationName (string), variableName (string)
Return type: string
@ -397,7 +397,7 @@ Next, I'll create another action in my `com.virtuallypotato.utility` module whic
![vraLogin action](vraLogin_action.png)

```javascript
/*
JavaScript: vraLogin action
Inputs: none
Return type: string
@ -429,7 +429,7 @@ I like to clean up after myself so I'm also going to create a `vraLogout` action
| `token` | `string` | Auth token of the session to destroy |

```javascript
/*
JavaScript: vraLogout action
Inputs: token (string)
Return type: string
@ -447,7 +447,7 @@ System.debug("Terminated vRA API session: " + token);
```

##### `vraExecute` action
My final "utility" action for this effort will run in between `vraLogin` and `vraLogout`, and it will handle making the actual API call and returning the results. This way I won't have to implement the API handler in every single action which needs to talk to the API - they can just call my new action, `vraExecute`.

![vraExecute action](vraExecute_action.png)
@ -485,7 +485,7 @@ return responseContent;
```

##### Bonus: `vraTester` action
That's it for the core utility actions - but wouldn't it be great to know that this stuff works before moving on to handling the request input? Enter `vraTester`! It will be handy to have an action I can test vRA REST requests in before going all-in on a solution.

This action will:
1. Call `vraLogin` to get an API token.
@ -493,7 +493,7 @@ This action will:
3. Call `vraLogout` to terminate the API session.
4. Return the data so we can see if it worked.

Other actions wanting to interact with the vRA REST API will follow the same basic formula, though with some more logic and capability baked in.

Anyway, here's my first swing:
```JavaScript
@ -685,7 +685,7 @@ I'll use the **Debug** button to test this action real quick-like, providing the
It works! Well, at least when called directly. Let's see how it does when called from Cloud Assembly.

### Cloud Assembly request
For now I'm really only testing using my new vRO actions so my Cloud Template is going to be pretty basic. I'm not even going to add any resources to the template; I don't even need it to be deployable.

![Completely blank template](blank_template.png)
@ -728,7 +728,7 @@ And I can use the **Test** button at the bottom of the Cloud Assembly template e
It does!

### Conclusion
This has been a very quick introduction on how to start pulling data from the vRA APIs, but it (hopefully) helps to consolidate all the knowledge and information I had to find when I started down this path - and maybe it will give you some ideas on how you can use this ability within your own vRA environment.

In the near future, I'll also have a post on how to do the same sort of things with the vCenter REST API, and I hope to follow that up with a deeper dive on all the tricks I've used to make my request forms as dynamic as possible with the absolute minimum of hardcoded data in the templates. Let me know in the comments if there are any particular use cases you'd like me to explore further.
@ -38,7 +38,7 @@ I'll be deploying this on a cloud server with these specs:
| --- | --- |
| Shape | `VM.Standard.A1.Flex` |
| Image | Ubuntu 22.04 |
| CPU Count | 1 |
| Memory (GB) | 6 |
| Boot Volume (GB) | 50 |
@ -65,7 +65,7 @@ When I bring up the Tailscale interface, I'll use the `--advertise-tags` flag to
sudo tailscale up --advertise-tags "tag:cloud"
```

[^tailnet]: [Tailscale's term](https://tailscale.com/kb/1136/tailnet/) for the private network which securely links Tailscale-connected devices.

#### Install Docker
Next I install Docker and `docker-compose`:
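
The exact commands fall outside this hunk; one common route on Ubuntu 22.04 (an assumption on my part, not necessarily what the post used) is the Docker convenience script plus the packaged Compose:

```shell
# Install Docker via the official convenience script, then docker-compose from the Ubuntu repos
curl -fsSL https://get.docker.com | sudo sh
sudo apt install -y docker-compose
```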
@ -126,15 +126,15 @@ run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```

-{{% notice info "Cloud Firewall" %}}
+{{% notice note "Cloud Firewall" %}}
Of course I will also need to create matching rules in the cloud firewall, but I'm not going to detail [those steps](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration) again here. And since I've now got Tailscale up and running I can remove the pre-created rule to allow SSH access through the cloud firewall.
{{% /notice %}}

### Install Gitea
I'm now ready to move on with installing Gitea itself.

#### Prepare `git` user
I'll start with creating a `git` user. This account will be set as the owner of the data volume used by the Gitea container, but will also (perhaps more importantly) facilitate [SSH passthrough](https://docs.gitea.io/en-us/install-with-docker/#ssh-container-passthrough) into the container for secure git operations.

Here's where I create the account and also generate what will become the SSH key used by the git server:
```bash
@ -153,8 +153,8 @@ When other users add their SSH public keys into Gitea's web UI, those will get a
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty <user pubkey>
```

-{{% notice info "Not just yet" %}}
+{{% notice note "Not just yet" %}}
No users have added their keys to Gitea just yet so if you look at `/home/git/.ssh/authorized_keys` right now you won't see this extra line, but I wanted to go ahead and mention it to explain the next step. It'll show up later. I promise.
{{% /notice %}}

So I'll go ahead and create that extra command:
@ -166,10 +166,10 @@ EOF
sudo chmod +x /usr/local/bin/gitea
```

So when I use a `git` command to interact with the server via SSH, the commands will get relayed into the Docker container on port 2222.
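
In practice that passthrough is invisible to clients; a plain SSH clone like the one below (repository path is hypothetical) lands in the container without any special port in the URL:

```shell
# Hypothetical repo; SSH traffic to the git user is relayed to the container on port 2222
git clone git@git.bowdre.net:john/notes.git
```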

#### Create `docker-compose` definition
That takes care of most of the prep work, so now I'm ready to create the `docker-compose.yaml` file which will tell Docker how to host Gitea.

I'm going to place this in `/opt/gitea`:
```bash
@ -251,7 +251,7 @@ services:
volumes:
- ./postgres:/var/lib/postgresql/data
```
-{{% notice info "Pin the PostgreSQL version" %}}
+{{% notice note "Pin the PostgreSQL version" %}}
The format of PostgreSQL data changes with new releases, and that means that the data created by different major releases are not compatible. Unless you take steps to upgrade the data format, you'll have problems when a new major release of PostgreSQL arrives. Avoid the headache: pin this to a major version (as I did with `image: postgres:14` above) so you can upgrade on your terms.
{{% /notice %}}
@ -293,12 +293,12 @@ Starting Gitea is as simple as
```bash
sudo docker-compose up -d
```
which will spawn both the Gitea server as well as a `postgres` database to back it.

Gitea will be listening on port `3000`.... which isn't exposed outside of the VM it's running on so I can't actually do anything with it just yet. Let's see about changing that.

### Configure Caddy reverse proxy
I've [written before](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#reverse-proxy-setup) about [Caddy server](https://caddyserver.com/) and how simple it makes creating a reverse proxy with automatic HTTPS. While Gitea does include [built-in HTTPS support](https://docs.gitea.io/en-us/https-setup/), configuring that to work within Docker seems like more work to me.

#### Install Caddy
So exactly how simple does Caddy make this? Well let's start with installing Caddy on the system:
@ -334,13 +334,13 @@ sudo systemctl start caddy
sudo systemctl restart caddy
```

I found that the `restart` is needed to make sure that the config file gets loaded correctly. And after a moment or two, I can point my browser over to `https://git.bowdre.net` and see the default landing page, complete with a valid certificate.
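
The Caddyfile itself isn't shown in this hunk, but a minimal reverse-proxy configuration along these lines (assuming the packaged Caddy's default `/etc/caddy/Caddyfile` and Gitea's port `3000` from earlier) is about all it takes:

```shell
# Sketch of a minimal Caddyfile for fronting Gitea; not necessarily the post's exact config
sudo tee /etc/caddy/Caddyfile <<'EOF'
git.bowdre.net {
    reverse_proxy localhost:3000
}
EOF
```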

### Configure Gitea
Now that Gitea is installed, I'll need to go through the initial configuration process to actually be able to use it. Fortunately most of this stuff was taken care of by all the environment variables I crammed into the `docker-compose.yaml` file earlier. All I *really* need to do is create an administrative user:
![Initial configuration](initial_config.png)

I can now press the friendly **Install Gitea** button, and after just a few seconds I'll be able to log in with that new administrator account.

#### Create user account
I don't want to use that account for all my git actions though so I click on the menu at the top right and select the **Site Administration** option:
@ -383,7 +383,7 @@ Hey - there's my public key, being preceded by the customized command I defined
### Configure Fail2ban
I'm already limiting this server's exposure by blocking inbound SSH (except for what's magically tunneled through Tailscale) at the Oracle Cloud firewall, but I still have to have TCP ports `80` and `443` open for the web interface. It would be nice if those web ports didn't get hammered with invalid login attempts.

[Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) can help with that by monitoring log files for repeated authentication failures and then creating firewall rules to block the offender.

Installing Fail2ban is simple:
```shell
@ -428,7 +428,7 @@ bantime = 86400
action = iptables-allports
```

This configures Fail2ban to watch the log file (`logpath`) inside the data volume mounted to the Gitea container for messages which match the pattern I just configured (`gitea`). If a system fails to log in 5 times (`maxretry`) within 1 hour (`findtime`, in seconds) then the offending IP will be banned for 1 day (`bantime`, in seconds).
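
Pulling those values together, the jail definition would look roughly like this - the log path is an assumption based on the `/opt/gitea` data volume described earlier, and the file name is hypothetical rather than the post's own:

```shell
# Hypothetical jail file matching the maxretry/findtime/bantime values described above
sudo tee /etc/fail2ban/jail.d/gitea.local <<'EOF'
[gitea]
enabled  = true
filter   = gitea
logpath  = /opt/gitea/data/gitea/log/gitea.log
maxretry = 5
findtime = 3600
bantime  = 86400
action   = iptables-allports
EOF
```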

Then I just need to enable and start Fail2ban:
```shell
@ -484,7 +484,7 @@ And if I refresh the page in my browser, I'll see all that content which has jus
![Populated repo](populated_repo.png)

### Conclusion
So now I've got a lightweight, web-enabled, personal git server running on a (free!) cloud server under my control. It's working brilliantly in conjunction with the community-maintained [obsidian-git](https://github.com/denolehov/obsidian-git) plugin for keeping my notes synced across my various computers. On Android, I'm leveraging the free [GitJournal](https://play.google.com/store/apps/details?id=io.gitjournal.gitjournal) app as a simple git client for pulling the latest changes (as described [on another blog I found](https://orth.uk/obsidian-sync/#clone-the-repo-on-your-android-phone-)).
@ -13,14 +13,14 @@ tags:
title: Integrating {php}IPAM with vRealize Automation 8
---

In a [previous post](/vmware-home-lab-on-intel-nuc-9), I described some of the steps I took to stand up a homelab including vRealize Automation (vRA) on an Intel NUC 9. One of my initial goals for that lab was to use it for developing and testing a way for vRA to leverage [phpIPAM](https://phpipam.net/) for static IP assignments. The homelab worked brilliantly for that purpose, and those extra internal networks were a big help when it came to testing. I was able to deploy and configure a new VM to host the phpIPAM instance, install the [VMware vRealize Third-Party IPAM SDK](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk) on my [Chromebook's Linux environment](/setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications), develop and build the integration component, import it to my vRA environment, and verify that deployments got addressed accordingly.

The resulting integration is available on Github [here](https://github.com/jbowdre/phpIPAM-for-vRA8). This was actually the second integration I'd worked on, having fumbled my way through a [Solarwinds integration](https://github.com/jbowdre/SWIPAMforvRA8) earlier last year. [VMware's documentation](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-4A5A481C-FC45-47FB-A120-56B73EB28F01.html) on how to build these things is pretty good, but I struggled to find practical information on how a novice like me could actually go about developing the integration. So maybe these notes will be helpful to anyone seeking to write an integration for a different third-party IP Address Management solution.

If you'd just like to import a working phpIPAM integration into your environment without learning how the sausage is made, you can grab my latest compiled package [here](https://github.com/jbowdre/phpIPAM-for-vRA8/releases/latest). You'll probably still want to look through Steps 0-2 to make sure your IPAM instance is set up similarly to mine.

### Step 0: phpIPAM installation and base configuration
Before even worrying about the SDK, I needed to [get a phpIPAM instance ready](https://phpipam.net/documents/installation/). I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my "Home" network (`192.168.1.0/24`). I installed Ubuntu 20.04.1 LTS, and then used [this guide](https://computingforgeeks.com/install-and-configure-phpipam-on-ubuntu-debian-linux/) to install phpIPAM.

Once phpIPAM was running and accessible via the web interface, I then used `openssl` to generate a self-signed certificate to be used for the SSL API connection:
```shell
@ -49,7 +49,7 @@ I edited the apache config file to bind that new certificate on port 443, and to
SSLCertificateKeyFile /etc/apache2/certificate/apache.key
</VirtualHost>
```
After restarting apache, I verified that hitting `http://ipam.lab.bowdre.net` redirected me to `https://ipam.lab.bowdre.net`, and that the connection was secured with the shiny new certificate.

Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`.
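
The netplan edit isn't visible in this hunk; based on the route and interface shown in the output below, the added stanza would look something like this (the drop-in file name is my own choice, not the post's):

```shell
# Sketch of the static route toward the lab networks via the Vyos router
sudo tee /etc/netplan/99-lab-routes.yaml <<'EOF'
network:
  version: 2
  ethernets:
    ens160:
      routes:
        - to: 172.16.0.0/16
          via: 192.168.1.100
EOF
```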
@ -79,9 +79,9 @@ I then ran `sudo netplan apply` so the change would take immediate effect and co
```
john@ipam:~$ sudo netplan apply
john@ipam:~$ ip route
default via 192.168.1.1 dev ens160 proto static
172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100
192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14
john@ipam:~$ ping 172.16.10.12
PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data.
64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms
@ -106,7 +106,7 @@ Next, I went to the **Users** item on the left-hand menu to create a new user ac
-![Creating vRA service account in phpIPAM](DiqyOlf5S.png)
+![Creating vRA service account in phpIPAM](QoxVKC11t.png)

The last step in configuring API access is to create an API key. This is done by clicking the **API** item on that left side menu and then selecting *Create API key*. I gave it the app ID `vra`, granted Read/Write permissions, and set the *App Security* option to "SSL with User token".
![Generating the API key](-aPGJhSvz.png)

Once we get things going, our API calls will authenticate with the username and password to get a token and bind that to the app ID.
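
As a rough sketch of that flow from the command line (endpoint paths follow phpIPAM's API conventions; the account name and password are placeholders, and `-k` is only there because of the self-signed certificate):

```shell
# Get a session token for the 'vra' app ID using the service account's credentials
curl -k -u svc-vra:MySecretPassword -X POST https://ipam.lab.bowdre.net/api/vra/user/

# Pass the returned token in a header on subsequent requests
curl -k -H "token: $TOKEN" https://ipam.lab.bowdre.net/api/vra/subnets/
```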
@ -115,7 +115,7 @@ Once we get things going, our API calls will authenticate with the username and
Our fancy new IPAM solution is ready to go - except for the whole bit about managing IPs. We need to tell it about the network segments we'd like it to manage. phpIPAM uses "Sections" to group subnets together, so we start by creating a new Section at **Administration > IP related management > Sections**. I named my new section `Lab`, and pretty much left all the default options. Be sure that the `Operators` group has read/write access to this section and the subnets we're going to create inside it!
![Creating a section to hold the subnets](6yo39lXI7.png)

We should also go ahead and create a Nameserver set so that phpIPAM will be able to tell its clients (vRA) what server(s) to use for DNS. Do this at **Administration > IP related management > Nameservers**. I created a new entry called `Lab` and pointed it at my internal DNS server, `192.168.1.5`.
![Designating the nameserver](pDsEh18bx.png)

Okay, we're finally ready to start entering our subnets at **Administration > IP related management > Subnets**. For each one, I entered the Subnet in CIDR format, gave it a useful description, and associated it with my `Lab` section. I expanded the *VLAN* dropdown and used the *Add new VLAN* option to enter the corresponding VLAN information, and also selected the Nameserver I had just created.
@ -123,11 +123,11 @@ Okay, we're finally ready to start entering our subnets at **Administration > IP
I also enabled the options ~~*Mark as pool*~~, *Check hosts status*, *Discover new hosts*, and *Resolve DNS names*.
![Subnet options](SR7oD0jsG.png)

-{{% notice info "Update" %}}
+{{% notice note "Update" %}}
Since releasing this integration, I've learned that phpIPAM intends for the `isPool` field to identify networks where the entire range (including the subnet and broadcast addresses) are available for assignment. As a result, I no longer recommend using that field. Instead, consider [creating a custom field](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/docs/custom_field.md) for tagging networks for vRA availability.
{{% /notice %}}

I then used the *Scan subnets for new hosts* button to run a discovery scan against the new subnet.
![Scanning for new hosts](4WQ8HWJ2N.png)

The scan only found a single host, `172.16.20.1`, which is the subnet's gateway address hosted by the Vyos router. I used the pencil icon to edit the IP and mark it as the gateway:
@ -182,7 +182,7 @@ Nice! Let's make it a bit more friendly:
>>> subnets = subnets.json()['data']
>>> for subnet in subnets:
... print("Found subnet: " + subnet['description'])
...
Found subnet: Home Network
Found subnet: 1610-Management
Found subnet: 1620-Servers-1
@ -192,7 +192,7 @@ Found subnet: 1640-Servers-3
Found subnet: 1650-Servers-4
Found subnet: 1660-Servers-5
```
We're in business!

Now that I know how to talk to phpIPAM via its REST API, it's time to figure out how to get vRA to speak that language.
@ -213,7 +213,7 @@ The README tells you to extract the .zip and make a simple modification to the `
<user.id>1000</user.id>
</properties>
```
You can then kick off the build with `mvn package -PcollectDependencies -Duser.id=${UID}`, which will (eventually) spit out `./target/phpIPAM.zip`. You can then [import the package to vRA](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) and test it against the `httpbin.org` hostname to validate that the build process works correctly.

You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing `./src/main/resources/endpoint-schema.json`. I added an `apiAppId` field:
```json
@ -292,7 +292,7 @@ You'll notice that the form includes fields for Username, Password, and Hostname
}
}
```
-{{% notice info "Update" %}}
+{{% notice note "Update" %}}
Check out the [source on GitHub](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/resources/endpoint-schema.json) to see how I adjusted the schema to support custom field input.
{{% /notice %}}
@ -359,12 +359,12 @@ try:
"statusCode": "200"
}
```
You can view the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/validate_endpoint/source.py).

After completing each operation, run `mvn package -PcollectDependencies -Duser.id=${UID}` to build again, and then import the package to vRA again. This time, you'll see the new "API App ID" field on the form:
![Validating the new IPAM endpoint](bpx8iKUHF.png)

Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
![Extensibility action runs](e4PTJxfqH.png)
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
```json
@ -381,7 +381,7 @@ Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy
"hostName": "ipam.lab.bowdre.net",
"properties": "[{\"prop_key\":\"phpIPAM.IPAM.apiAppId\",\"prop_value\":\"vra\"}]",
"providerId": "301de00f-d267-4be2-8065-fabf48162dc1",
```
And we can see that the Outputs reflect our successful result:
```json
{
@ -427,7 +427,7 @@ subnets = subnets.json()['data']
```
I decided to add the extra `filter_by=isPool&filter_value=1` argument to the query so that it will only return subnets marked as a pool in phpIPAM. This way I can use phpIPAM for monitoring address usage on a much larger set of subnets while only presenting a handful of those to vRA.

-{{% notice info "Update" %}}
+{{% notice note "Update" %}}
I now filter for networks identified by the designated custom field like so:
```python
# Request list of subnets
@ -444,7 +444,7 @@ I now filter for networks identified by the designated custom field like so:
```
{{% /notice %}}

Now is a good time to consult [that white paper](https://docs.vmware.com/en/VMware-Cloud-services/1.0/ipam_integration_contract_reqs.pdf) to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return `ipRanges` which is a list of `IpRange` objects. `IpRange` requires `id`, `name`, `startIPAddress`, `endIPAddress`, `ipVersion`, and `subnetPrefixLength` properties. It can also accept `description`, `gatewayAddress`, and `dnsServerAddresses` properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly.

For instance, these are pretty direct matches:
```python
@ -500,7 +500,7 @@ for subnet in subnets:
ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')]
except:
ipRange['dnsServerAddresses'] = []
# try to get the address marked as the gateway in IPAM
gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert)
if gw_req.status_code == 200:
gateway = gw_req.json()['data'][0]['ip']
@ -515,7 +515,7 @@ return result
```
The full code can be found [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/get_ip_ranges/source.py). You may notice that I removed all the bits which were in the VMware-provided skeleton about paginating the results. I honestly wasn't entirely sure how to implement that, and I also figured that since I'm already limiting the results by the `is_pool` filter I shouldn't have a problem with the IPAM server returning an overwhelming number of IP ranges. That could be an area for future improvement though.

In any case, it's time to once again use `mvn package -PcollectDependencies -Duser.id=${UID}` to fire off the build, and then import `phpIPAM.zip` into vRA.

vRA runs the `phpIPAM_GetIPRanges` action about every ten minutes so keep checking back on the **Extensibility > Action Runs** view until it shows up. You can then select the action and review the Log to see which IP ranges got picked up:
```log
@ -533,7 +533,7 @@ Note that it *did not* pick up my "Home Network" range since it wasn't set to be
We can also navigate to **Infrastructure > Networks > IP Ranges** to view them in all their glory:
![Reviewing the discovered IP ranges](7_QI-Ti8g.png)

You can then follow [these instructions](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) to associate the external IP ranges with networks available for vRA deployments.

Next, we need to figure out how to allocate an IP.
@ -618,7 +618,7 @@ payload = {
'description': f'Reserved by vRA for {owner} at {datetime.now()}'
}
```
That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate `import datetime` statement at the top of this file, and include `datetime` in `requirements.txt`.

So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which `range_id` to use and it will return the first available IP.
```python
@ -634,7 +634,7 @@ if allocate_req['success']:
"ipAllocationId": allocation['id'],
"ipRangeId": range_id,
"ipVersion": "IPv" + str(version),
"ipAddresses": [allocate_req['data']]
}
logging.info(f"Successfully reserved {str(result['ipAddresses'])} for {vmName}.")
else:
@ -37,13 +37,13 @@ In the next couple of posts, I'll share the details of how I'm using Terraform t
I have definitely learned a ton in the process (and still have a lot more to learn), but today I'll start by describing how I'm leveraging Packer to create a single VM template ready to enter service as a Kubernetes compute node.

## What's Packer, and why?
[HashiCorp Packer](https://www.packer.io/) is a free open-source tool designed to create consistent, repeatable machine images. It's pretty killer as a part of a CI/CD pipeline to kick off new builds based on a schedule or code commits, but also works great for creating builds on-demand. Packer uses the [HashiCorp Configuration Language (HCL)](https://developer.hashicorp.com/packer/docs/templates/hcl_templates) to describe all of the properties of a VM build in a concise and readable format.

You might ask why I would bother with using a powerful tool like Packer if I'm just going to be building a single template. Surely I could just do that by hand, right? And of course, you'd be right - but using an Infrastructure as Code tool even for one-off builds has some pretty big advantages.

- **It's fast.** Packer is able to build a complete VM (including pulling in all available OS and software updates) in just a few minutes, much faster than I could click through an installer on my own.
- **It's consistent.** Packer will follow the exact same steps for every build, removing the small variations (and typos!) that would surely show up if I did the builds manually.
- **It's great for testing changes.** Since Packer builds are so fast and consistent, it makes it incredibly easy to test changes as I go. I can be confident that the *only* changes between two builds will be the changes I deliberately introduced.
- **It's self-documenting.** The entire VM (and its guest OS) is described completely within the Packer HCL file(s), which I can review to remember which packages were installed, which user account(s) were created, what partition scheme was used, and anything else I might need to know.
- **It supports change tracking.** A Packer build is just a set of HCL files so it's easy to sync them with a version control system like Git to track (and revert) changes as needed.
@ -66,7 +66,7 @@ You can learn how to install Packer on other systems by following [this tutorial
Packer will need a user account with sufficient privileges in the vSphere environment to be able to create and manage a VM. I'd recommend using an account dedicated to automation tasks, and assigning it the required privileges listed in [the `vsphere-iso` documentation](https://developer.hashicorp.com/packer/plugins/builders/vsphere/vsphere-iso#required-vsphere-privileges).

### Gather installation media
My Kubernetes node template will use Ubuntu 20.04 LTS as the OS so I'll go ahead and download the [server installer ISO](https://releases.ubuntu.com/20.04.5/) and upload it to a vSphere datastore to make it available to Packer.
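
The download/upload mechanics aren't shown here; one way to do it from a shell (using the `govc` CLI, which is my assumption - the vSphere UI works just as well) looks like:

```shell
# Fetch the Ubuntu 20.04.5 live server ISO and push it to a datastore with govc
# (govc reads the vCenter connection from GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD; "datastore1" is a placeholder)
wget https://releases.ubuntu.com/20.04.5/ubuntu-20.04.5-live-server-amd64.iso
govc datastore.upload -ds=datastore1 ubuntu-20.04.5-live-server-amd64.iso ISO/ubuntu-20.04.5-live-server-amd64.iso
```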

## Template build
After the OS is installed and minimally configured, I'll need to add in Kubernetes components like `containerd`, `kubectl`, `kubelet`, and `kubeadm`, and then apply a few additional tweaks to get it fully ready.
@ -101,11 +101,11 @@ After quite a bit of experimentation, I've settled on a preferred way to organiz
└── variables.pkr.hcl
```

- The `certs` folder holds the Base64-encoded PEM-formatted certificate of my [internal Certificate Authority](/ldaps-authentication-tanzu-community-edition/#prequisite) which will be automatically installed in the provisioned VM's trusted certificate store.
- The `data` folder stores files for [generating the `cloud-init` configuration](#user-datapkrtplhcl) that will automate the OS installation and configuration.
- The `scripts` directory holds a [collection of scripts](#post_install_scripts) used for post-install configuration tasks. Sure, I could just use a single large script, but using a bunch of smaller ones helps keep things modular and easy to reuse elsewhere.
- `variables.pkr.hcl` declares [all of the variables](#variablespkrhcl) which will be used in the Packer build, and sets the default values for some of them.
- `ubuntu-k8s.auto.pkrvars.hcl` [assigns values](#ubuntu-k8sautopkrvarshcl) to those variables. This is where most of the user-facing options will be configured, such as usernames, passwords, and environment settings.
- `ubuntu-k8s.pkr.hcl` is where the [build process](#ubuntu-k8spkrhcl) is actually described.

Let's quickly run through that build process, and then I'll back up and examine some other components in detail.
@ -133,7 +133,7 @@ packer {
As I mentioned above, I'll be using the official [`vsphere` plugin](https://github.com/hashicorp/packer-plugin-vsphere) to handle the provisioning on my vSphere environment. I'll also make use of the [`sshkey` plugin](https://github.com/ivoronin/packer-plugin-sshkey) to dynamically generate SSH keys for the build process.

#### `data` block
This section would be used for loading information from various data sources, but I'm only using it for the `sshkey` plugin (as mentioned above).
```text
// BLOCK: data
// Defines data sources.
@ -178,7 +178,7 @@ locals {
This block also makes use of the built-in [`templatefile()` function](https://developer.hashicorp.com/packer/docs/templates/hcl_templates/functions/file/templatefile) to insert build-specific variables into the `user-data` file for [`cloud-init`](https://cloud-init.io/) (more on that in a bit).

#### `source` block
The `source` block tells the `vsphere-iso` builder how to connect to vSphere, what hardware specs to set on the VM, and what to do with the VM once the build has finished (convert it to template, export it to OVF, and so on).

You'll notice that most of this is just mapping user-defined variables (with the `var.` prefix) to properties used by `vsphere-iso`:
@ -233,7 +233,7 @@ source "vsphere-iso" "ubuntu-k8s" {
  cd_content    = local.data_source_content
  cd_label      = var.cd_label

  // Boot and Provisioning Settings
  boot_order    = var.vm_boot_order
  boot_wait     = var.vm_boot_wait
  boot_command  = var.vm_boot_command
@ -269,7 +269,7 @@ source "vsphere-iso" "ubuntu-k8s" {
  // OVF Export Settings
  dynamic "export" {
    for_each = var.common_ovf_export_enabled == true ? [1] : []
    content {
      name  = var.vm_name
      force = var.common_ovf_export_overwrite
      options = [
@ -311,7 +311,7 @@ build {
    expect_disconnect = true
    scripts           = var.pre_final_scripts
  }
}
```

So you can see that the `ubuntu-k8s.pkr.hcl` file primarily focuses on the structure and form of the build, and it's written in such a way that it can be fairly easily adapted for building other types of VMs. Very few things in this file would have to be changed since so many of the properties are derived from the variables.

@ -721,7 +721,7 @@ variable "k8s_version" {
The full `variables.pkr.hcl` can be viewed [here](https://github.com/jbowdre/vsphere-k8s/blob/main/packer/variables.pkr.hcl).

### `ubuntu-k8s.auto.pkrvars.hcl`
Packer automatically knows to load variables defined in files ending in `*.auto.pkrvars.hcl`. Storing the variable values separately from the declarations in `variables.pkr.hcl` makes it easier to protect sensitive values.

So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to:
```text
@ -829,7 +829,7 @@ ssh_keys = [
]
```

Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The `post_install_scripts` will be run immediately after the operating system installation. The `update-packages.sh` script will cause a reboot, and then the set of `pre_final_scripts` will do some cleanup and prepare the VM to be converted to a template.

The last bit of this file also designates the desired version of Kubernetes to be installed.
```text
@ -860,7 +860,7 @@ k8s_version = "1.25.3"
You can find a full example of this file [here](https://github.com/jbowdre/vsphere-k8s/blob/main/packer/ubuntu-k8s.example.pkrvars.hcl).

### `user-data.pkrtpl.hcl`
Okay, so we've covered the Packer framework that creates the VM; now let's take a quick look at the `cloud-init` configuration that will allow the OS installation to proceed unattended.

See the bits that look `${ like_this }`? Those place-holders will take input from the [`locals` block of `ubuntu-k8s.pkr.hcl`](#locals-block) mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys.

@ -1054,7 +1054,7 @@ autoinstall:
    ssh_authorized_keys:
%{ for ssh_key in ssh_keys ~}
      - ${ ssh_key }
%{ endfor ~}
%{ endif ~}
```

@ -1071,7 +1071,7 @@ This simply holds up the process until the `/var/lib/cloud//instance/boot-finish
```shell
#!/bin/bash -eu
echo '>> Waiting for cloud-init...'
while [ ! -f /var/lib/cloud/instance/boot-finished ]; do
  sleep 1
done
```
@ -1080,12 +1080,12 @@ done
Next I clean up any network configs that may have been created during the install process:
```shell
#!/bin/bash -eu
if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then
  sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg
  echo 'Deleting subiquity cloud-init config'
fi

if [ -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg ]; then
  sudo rm /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg
  echo 'Deleting subiquity cloud-init network config'
fi
@ -1245,10 +1245,10 @@ Lastly, let's do a final run of cleaning up logs, temporary files, and unique id
# Prepare a VM to become a template.

echo '>> Clearing audit logs...'
sudo sh -c 'if [ -f /var/log/audit/audit.log ]; then
    cat /dev/null > /var/log/audit/audit.log
fi'
sudo sh -c 'if [ -f /var/log/wtmp ]; then
    cat /dev/null > /var/log/wtmp
fi'
sudo sh -c 'if [ -f /var/log/lastlog ]; then
@ -1297,7 +1297,7 @@ Now that all the ducks are nicely lined up, let's give them some marching orders
packer build -on-error=abort -force .
```

{{% notice info "Flags" %}}
{{% notice note "Flags" %}}
The `-on-error=abort` option makes sure that the build will abort if any steps in the build fail, and `-force` tells Packer to delete any existing VMs/templates with the same name as the one I'm attempting to build.
{{% /notice %}}

@ -61,7 +61,7 @@ The [cluster deployment steps](/tanzu-community-edition-k8s-homelab/#management-
| Base DN         | `OU=LAB,DC=lab,DC=bowdre,DC=net`  | DN for OU containing my users |
| Filter          | `(objectClass=group)`             | |
| Name Attribute  | `cn`                              | Common Name |
| User Attribute  | `DN`                              | Distinguished Name (capitalization matters!) |
| Group Attribute | `member:1.2.840.113556.1.4.1941:` | Used to enumerate which groups a user is a member of[^member] |

And I'll copy the contents of the base64-encoded CA certificate I downloaded earlier and paste them into the Root CA Certificate field.
@ -88,7 +88,7 @@ That `:` at the end of the line will cause problems down the road - specifically
```yaml
userMatchers:
- userAttr: DN
  groupAttr:
    member:1.2.840.113556.1.4.1941: null
```

@ -153,7 +153,7 @@ tkg-system vsphere-csi Reconcile succeeded 66s 11m

### Post-deployment tasks

I've got a TCE cluster now but it's not quite ready for me to authenticate with my AD credentials just yet.

#### Load Balancer deployment
The [guide I'm following from the TCE site](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/) assumes that I'm using NSX-ALB in my environment, but I'm not. So, [as before](/tanzu-community-edition-k8s-homelab/#deploying-kube-vip-as-a-load-balancer), I'll need to deploy [Scott Rosenberg's `kube-vip` Carvel package](https://github.com/vrabbi/tkgm-customizations):
@ -207,7 +207,7 @@ This overlay will need to be inserted into the `pinniped-addon` secret which mea
❯ base64 -w 0 pinniped-supervisor-svc-overlay.yaml
I0AgbG9hZCgi[...]==
```
{{% notice info "Avoid newlines" %}}
{{% notice note "Avoid newlines" %}}
The `-w 0` / `--wrap=0` argument tells `base64` to *not* wrap the encoded lines after a certain number of characters. If you leave this off, the string will get a newline inserted every 76 characters, and those linebreaks would make the string a bit more tricky to work with. Avoid having to clean up the output afterwards by being more specific with the request up front!
{{% /notice %}}

@ -220,14 +220,14 @@ secret/tce-mgmt-pinniped-addon patched
I can watch as the `pinniped-supervisor` and `dexsvc` services get updated with the new service type:
```bash
❯ kubectl get svc -A -w
NAMESPACE             NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)
pinniped-supervisor   pinniped-supervisor   NodePort       100.65.185.82    <none>         443:31234/TCP
tanzu-system-auth     dexsvc                NodePort       100.70.238.106   <none>         5556:30167/TCP
tkg-system            packaging-api         ClusterIP      100.65.185.94    <none>         443/TCP
tanzu-system-auth     dexsvc                LoadBalancer   100.70.238.106   <pending>      443:30167/TCP
pinniped-supervisor   pinniped-supervisor   LoadBalancer   100.65.185.82    <pending>      443:31234/TCP
pinniped-supervisor   pinniped-supervisor   LoadBalancer   100.65.185.82    192.168.1.70   443:31234/TCP
tanzu-system-auth     dexsvc                LoadBalancer   100.70.238.106   192.168.1.64   443:30167/TCP
```

I'll also need to restart the `pinniped-post-deploy-job` job to account for the changes I just made; that's accomplished by simply deleting the existing job. After a few minutes a new job will be spawned automagically. I'll just watch for the new job to be created:
@ -262,7 +262,7 @@ roleRef:
  apiGroup: rbac.authorization.k8s.io
```

I have a group in Active Directory called `Tanzu-Admins` which contains a group called `vRA-Admins`, and that group contains my user account (`john`). It's a roundabout way of granting access for a single user in this case but it should help to confirm that nested group memberships are being enumerated properly.

Once applied, users within that group will be granted the `cluster-admin` role[^roles].

@ -305,7 +305,7 @@ tce-mgmt-md-0-847db9ddc-5bwjs Ready <none> 28h v1.21.5+vm

So I've now successfully logged in to the management cluster as a non-admin user with my Active Directory credentials. Excellent!

[^roles]: You can read up on some other default user-facing roles [here](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles).

### Sharing access
To allow other users to log in this way, I'd need to give them a copy of the non-admin `kubeconfig`, which I can get by running `tanzu management-cluster config get --export-file tce-mgmt-config` to export it into a file named `tce-mgmt-config`. They could use [whatever method they like](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) to merge this in with their existing `kubeconfig`.
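
For example, one common way to handle that merge (a generic sketch, not the post's own procedure; the file paths are placeholders) is to let `kubectl` flatten both configs into a single file:

```bash
# export the non-admin kubeconfig from the management cluster
tanzu management-cluster config get --export-file tce-mgmt-config

# merge it with an existing kubeconfig and write out a single flattened file
KUBECONFIG=~/.kube/config:./tce-mgmt-config kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
```
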
@ -315,7 +315,7 @@ Other users hoping to work with a Tanzu Community Edition cluster will also need
{{% /notice %}}

### Deploying a workload cluster
At this point, I've only configured authentication for the management cluster - not the workload cluster. The TCE community docs cover what's needed to make this configuration available in the workload cluster as well [here](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/#configuration-steps-on-the-workload-cluster). [As before](/tanzu-community-edition-k8s-homelab/#workload-cluster), I created the deployment YAML for the workload cluster by copying the management cluster's deployment YAML and changing the `CLUSTER_NAME` and `VSPHERE_CONTROL_PLANE_ENDPOINT` values accordingly. This time I also deleted all of the `LDAP_*` and `OIDC_*` lines, but made sure to preserve the `IDENTITY_MANAGEMENT_TYPE: ldap` one.

I was then able to deploy the workload cluster with:
```bash
@ -417,7 +417,7 @@ dex-7bf4f5d4d9-k4jfl 1/1 Running 0 40h
```

#### Clearing pinniped sessions
I couldn't figure out an elegant way to log out so that I could try authenticating as a different user, but I did discover that information about authenticated sessions gets stored in `~/.config/tanzu/pinniped/sessions.yaml`. The sessions expire after a while, but until that happens I'm able to keep on interacting with `kubectl` - and not given an option to re-authenticate even if I wanted to.

So in lieu of a handy logout option, I was able to remove the cached sessions by deleting the file:
```bash
@ -427,4 +427,4 @@ rm ~/.config/tanzu/pinniped/sessions.yaml
That let me use `kubectl get nodes` to trigger the authentication prompt again.

### Conclusion
So this is a pretty basic walkthrough of how I set up my Tanzu Community Edition Kubernetes clusters for Active Directory authentication in my homelab. I feel like I've learned a lot more about TCE specifically and Kubernetes in general through this process, and I'm sure I'll learn more in the future as I keep experimenting with the setup.

@ -22,7 +22,7 @@ tags:
- powercli
comment: true # Disable comment if false.
---
{{% notice info "Fix available" %}}
{{% notice note "Fix available" %}}
VMware has released a fix for this problem in the form of [ESXi 7.0 Update 3k](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3k-release-notes.html#resolvedissues):
> If you already face the issue, after patching the host to ESXi 7.0 Update 3k, just power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can migrate a running Windows Server 2022 VM from a host of version earlier than ESXi 7.0 Update 3k, install KB5022842, and the VM boots properly without any additional steps required.
{{% /notice %}}

@ -31,8 +31,8 @@ And then I discovered [Tailscale](https://tailscale.com/), which is built on the

There's already a great write-up (from the source!) on [How Tailscale Works](https://tailscale.com/blog/how-tailscale-works/), and it's really worth a read so I won't rehash it fully here. The tl;dr though is that Tailscale makes securely connecting remote systems incredibly easy, and it lets those systems connect with each other directly ("mesh") rather than needing traffic to go through a single VPN endpoint ("hub-and-spoke"). It uses a centralized coordination server to *coordinate* the complicated key exchanges needed for all members of a Tailscale network (a "[tailnet](https://tailscale.com/kb/1136/tailnet/)") to trust each other, and this removes the need for a human to manually edit configuration files on every existing device just to add a new one to the mix. Tailscale also leverages [magic :tada:](https://tailscale.com/blog/how-nat-traversal-works/) to allow Tailscale nodes to communicate with each other without having to punch holes in firewall configurations or forward ports or anything else tedious and messy. (And in case the typical NAT traversal techniques don't work out, Tailscale created the Detoured Encrypted Routing Protocol (DERP[^derp]) to make sure Tailscale can still function seamlessly even on extremely restrictive networks that block UDP entirely or otherwise interfere with NAT traversal.)

{{% notice info "Not a VPN Service" %}}
{{% notice note "Not a VPN Service" %}}
It's a no-brainer solution for remote access, but it's important to note that Tailscale is not a VPN *service*; it won't allow you to internet anonymously or make it appear like you're connecting from a different country (unless you configure a Tailscale Exit Node hosted somewhere in The Cloud to do just that).
{{% /notice %}}

Tailscale's software is [open-sourced](https://github.com/tailscale) so you *could* host your own Tailscale control plane and web front end, but much of the appeal of Tailscale is how easy it is to set up and use. To that end, I'm using the Tailscale-hosted option. Tailscale offers a very generous free Personal tier which supports a single admin user, 20 connected devices, 1 subnet router, plus all of the bells and whistles, and the company also sells [Team, Business, and Enterprise plans](https://tailscale.com/pricing/) if you need more users, devices, subnet routers, or additional capabilities[^personal_pro].
@ -49,7 +49,7 @@ This post will start there but then also expand some of the additional features
[^personal_pro]: There's also a reasonably-priced Personal Pro option which comes with 100 devices, 2 routers, and custom auth periods for $48/year. I'm using that since it's less than I was going to spend on WireGuard egress through GCP and I want to support the project in a small way.

### Getting started
The first step in getting up and running with Tailscale is to sign up at [https://login.tailscale.com/start](https://login.tailscale.com/start). You'll need to use an existing Google, Microsoft, or GitHub account to sign up, which also lets you leverage the 2FA and other security protections already enabled on those accounts.

Once you have a Tailscale account, you're ready to install the Tailscale client. The [download page](https://tailscale.com/download) outlines how to install it on various platforms, and also provides a handy-dandy one-liner to install it on Linux:

@ -208,7 +208,7 @@ By default, Tailscale [expires each node's encryption keys every 180 days](https
It's great that all my Tailscale machines can talk to each other directly by their respective Tailscale IP addresses, but who wants to keep up with IPs? I sure don't. Let's do some DNS. I'll start out by clicking on the [DNS](https://login.tailscale.com/admin/dns) tab in the admin console.
![The DNS options](dns_tab.png)

I need to add a Global Nameserver before I can enable MagicDNS so I'll click on the appropriate button to enter in the *Tailscale IP*[^dns_ip] of my home DNS server (which is using [NextDNS](https://nextdns.io/) as the upstream resolver).
![Adding a global name server](add_global_ns.png)

I'll also enable the toggle to "Override local DNS" to make sure all queries from connected clients are going through this server (and thus extend the NextDNS protection to all clients without having to configure them individually).
@ -219,7 +219,7 @@ I can also define search domains to be used for unqualified DNS queries by addin

This will let me resolve hostnames when connected remotely to my lab without having to type the domain suffix (ex, `vcsa` versus `vcsa.lab.bowdre.net`).

And, finally, I can click the "Enable MagicDNS" button to turn on the magic. This adds a new nameserver with a private Tailscale IP which will resolve Tailscale hostnames to their internal IP addresses.

![MagicDNS Enabled!](magicdns.png)

@ -237,7 +237,7 @@ I'm going to use three tags in my tailnet:
2. `tag:cloud` to identify my cloud servers which will only have access to other cloud servers.
3. `tag:client` to identify client-type devices which will be able to access all nodes in the tailnet.

Before I can actually apply these tags to any of my machines, I first need to define `tagOwners` for each tag which will determine which users (in my organization of one) will be able to use the tags. This is done by editing the policy file available on the [Access Controls](https://login.tailscale.com/admin/acls) tab of the admin console.

This ACL file uses a format called [HuJSON](https://github.com/tailscale/hujson), which is basically JSON but with support for inline comments and with a bit of leniency when it comes to trailing commas. That makes a config file that is easy for both humans and computers to parse.
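
For a sense of what that `tagOwners` stanza looks like, here's a minimal sketch; the owner value is a placeholder for the account that's allowed to apply the tags, not necessarily what the real policy file contains:

```json
{
  "tagOwners": {
    // each tag lists the users (or groups) allowed to apply it
    "tag:home":   ["user@example.com"],
    "tag:cloud":  ["user@example.com"],
    "tag:client": ["user@example.com"],
  },
}
```
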
@ -349,7 +349,7 @@ And that gets DNS working again for my cloud servers while still serving the res
      "ports": [
        "win01:53"
      ]
    },
    {
      // clients can access everything
      "action": "accept",

@ -24,11 +24,11 @@ tags:
- tailscale
comment: true # Disable comment if false.
---
You might remember that I'm a [pretty big fan](/secure-networking-made-simple-with-tailscale/) of [Tailscale](https://tailscale.com), which makes it easy to connect your various devices together in a secure [tailnet](https://tailscale.com/kb/1136/tailnet/), or private network. Tailscale is super simple to set up on most platforms, but you'll need to [install it manually](https://tailscale.com/download/linux/static) if there isn't a prebuilt package for your system.

Here's a condensed list of the [steps that I took to manually install Tailscale](/esxi-arm-on-quartz64/#installing-tailscale) on VMware's [Photon OS](https://github.com/vmware/photon), though the same (or similar) steps should also work on just about any other `systemd`-based system.

1. Visit [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static) to see the latest stable version for your system architecture, and copy the URL. For instance, I'll be using `https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz`.
2. Download and extract it to the system:
```shell
wget https://pkgs.tailscale.com/stable/tailscale_1.34.1_arm64.tgz
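# unpacking the archive would be the natural next step here - this line is a
# reasonable guess at that follow-on command, not necessarily the exact one used
tar xvf tailscale_1.34.1_arm64.tgz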
@ -50,7 +50,7 @@ sudo systemctl start tailscaled

From that point, just [`sudo tailscale up`](https://tailscale.com/kb/1080/cli/#up) like normal.

{{% notice info "Updating Tailscale" %}}
{{% notice note "Updating Tailscale" %}}
Since Tailscale was installed outside of any package manager, it won't get updated automatically. When new versions are released you'll need to update it manually. To do that:
1. Download and extract the new version.
2. Install the `tailscale` and `tailscaled` binaries as described above (no need to install the service files again).

@ -37,7 +37,7 @@ I've found that the easiest way to do this it to copy it to a datastore which is
![Offline bundle stored on the local datastore](bundle_on_datastore.png)

### 2. Power down VMs
The host will need to be in maintenance mode in order to apply the upgrade, and since it's a standalone host it won't enter maintenance mode until all of its VMs have been stopped. This can be easily accomplished through the ESXi embedded host client.

### 3. Place host in maintenance mode
I can do that by SSH'ing to the host and running:
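
A typical way to do that with `esxcli` looks something like this (a generic sketch, not necessarily the exact command from the original post):

```shell
# enter maintenance mode; assumes all VMs on the host are already powered off
esxcli system maintenanceMode set -e true
```
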
@ -60,7 +60,7 @@ Name Vendor Acceptance Level Creation Time
ESXi-8.0.0-20513097-standard   VMware, Inc.   PartnerSupported   2022-09-23T18:59:28   2022-09-23T18:59:28
ESXi-8.0.0-20513097-no-tools   VMware, Inc.   PartnerSupported   2022-09-23T18:59:28   2022-09-23T18:59:28
```
{{% notice info "Absolute paths" %}}
{{% notice note "Absolute paths" %}}
When using the `esxcli` command to install software/updates, it's important to use absolute paths rather than relative paths. Otherwise you'll get errors and wind up chasing your tail for a while.
{{% /notice %}}

@ -15,7 +15,7 @@ title: 'vRA8 Custom Provisioning: Part Two'
We [last left off this series](/vra8-custom-provisioning-part-one) after I'd set up vRA, performed a test deployment off of a minimal cloud template, and then enhanced the simple template to use vRA tags to let the user specify where a VM should be provisioned. But these VMs have kind of dumb names; right now, they're just getting named after the user who requests it + a random couple of digits, courtesy of a simple [naming template defined on the project's Provisioning page](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-AD400ED7-EB3A-4D36-B9A7-81E100FB3003.html?hWord=N4IghgNiBcIHZgLYEs4HMQF8g):
![Naming template](zAF26KJnO.png)

I could use this naming template to *almost* accomplish what I need from a naming solution, but I don't like that the numbers are random rather than a sequence (I want to deploy `server001` followed by `server002` rather than `server343` followed by `server718`). And it's not enough for me that a VM's name be unique just within the scope of vRA - the hostname should be unique across my entire environment.

So I'm going to have to get my hands dirty and develop a new solution using vRealize Orchestrator. For right now, it should create a name for a VM that fits a defined naming schema, while also ensuring that the name doesn't already exist within vSphere. (I'll add checks against Active Directory and DNS in the next post.)

@ -30,7 +30,7 @@ For my environment, servers should be named like `BOW-DAPP-WEB001` where:
So in vRA's custom naming template syntax, this could look something like:
- `${site}-${environment}${function}-${application}${###}`

Okay, this plan is coming together.

### Adding more inputs to the cloud template
I'll start by adding those fields as inputs on my cloud template.
@ -202,10 +202,10 @@ Oh yeah, I need to create a thing that will take these naming elements, mash the
### Setting up vRO config elements
When I first started looking for a naming solution, I found a [really handy blog post from Michael Poore](https://blog.v12n.io/custom-naming-in-vrealize-automation-8x-1/) that described his solution to doing custom naming. I wound up following his general approach but had to adapt it a bit to make the code work in vRO 8 and to add in the additional checks I wanted. So credit to Michael for getting me pointed in the right direction!

I start by hopping over to the Orchestrator interface and navigating to the Configurations section. I'm going to create a new configuration folder named `CustomProvisioning` that will store all the Configuration Elements I'll use to configure my workflows on this project.
![Configuration Folder](y7JKSxsqE.png)

Defining certain variables within configurations separates those from the workflows themselves, making the workflows much more portable. That will allow me to transfer the same code between multiple environments (like my homelab and my lab environment at work) without having to rewrite a bunch of hardcoded values.

Now I'll create a new configuration within the new folder. This will hold information about the naming schema so I name it `namingSchema`. In it, I create two strings to define the base naming format (up to the numbers on the end) and full name format (including the numbers). I define `baseFormat` and `nameFormat` as templates based on what I put together earlier.
![The namingSchema configuration](zLec-3X_D.png)
@ -264,7 +264,7 @@ Going back to my VM Provisioning workflow, I drag an Action Element onto the can
![image.png](o8CgTjSYm.png)

#### Event Broker Subscription
And at this point I save the workflow. I'm not finished with it - not by a long shot! - but this is a great place to get the workflow plumbed up to vRA and run a quick test. So I go to the vRA interface, hit up the Extensibility tab, and create a new subscription. I name it "VM Provisioning" and set it to fire on the "Compute allocation" event, which will happen right before the VM starts getting created. I link in my VM Provisioning workflow, and also set this as a blocking execution so that no other/future workflows will run until this one completes.
![VM Provisioning subscription](IzaMb39C-.png)

Alrighty, let's test this and see if it works. I head back to the Design tab and kick off another deployment.
@ -317,11 +317,11 @@ It creates a new `requestProperties (Properties)` variable to store the limited
I'll also drop in a "Foreach Element" item, which will run a linked workflow once for each item in an input array (`originalNames (Array/string)` in this case). I haven't actually created that nested workflow yet so I'm going to skip selecting that for now.
![Nested workflow placeholder](UIafeShcv.png)

The final step of this workflow will be to replace the existing contents of `resourceNames (Array/string)` with the new name.

I'll do that with another scriptable task element, named `Apply new names`, which takes `inputProperties (Properties)` and `newNames (Array/string)` as inputs. It will return `resourceNames (Array/string)` as a *workflow output* back to vRA. vRA will see that `resourceNames` has changed and it will update the name of the deployed resource (the VM) accordingly.

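For illustration, the body of that `Apply new names` task could be as simple as something like this (a hedged sketch with assumed variable handling, not necessarily the post's exact code):

```js
// start from the original resource names passed in from vRA
var resourceNames = inputProperties.get("resourceNames");
// overwrite each entry with the newly generated name
for (var i = 0; i < newNames.length; i++) {
    resourceNames[i] = newNames[i];
}
// resourceNames is bound as a workflow output, so vRA will pick up the new name(s)
```
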
{{% notice info "Binding a workflow output" %}}
{{% notice note "Binding a workflow output" %}}
To easily create a new workflow output and bind it to a task's output, click the task's **Add New** option like usual:
![](add_new.png)
Select **Output** at the top of the *New Variable* dialog and then complete the form with the other required details:
@ -409,7 +409,7 @@ On the connection properties page, I unchecked the per-user connection in favor
After successful completion of the workflow, I can go to Administration > Inventory and confirm that the new endpoint is there:
![vCenter plugin endpoint](rUmGPdz2I.png)

I've only got the one vCenter in my lab. At work, I've got multiple vCenters so I would need to repeat these steps to add each of them as an endpoint.

#### Task: prepare vCenter SDK connection
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
@ -468,7 +468,7 @@ for (var i in elements) {
    }
}

// Lookup hostnameBase and increment sequence value
try {
    var attribute = computerNames.getAttributeWithKey(hostnameBase);
    hostnameSeq = attribute.value;
@ -517,7 +517,7 @@ System.log("No VM name conflicts found for " + candidateVmName)
```

#### Conflict resolution
So what happens if there *is* a naming conflict? This solution wouldn't be very flexible if it just gave up as soon as it encountered a problem. Fortunately, I planned for this - all I need to do in the event of a conflict is to run the `generate hostnameSeq & candidateVmName` task again to increment `hostnameSeq (Number)` by one, use that to create a new `candidateVmName (String)`, and then continue on with the checks.

So far, all of the workflow elements have been connected with happy blue lines which show the flow when everything is going according to the plan. Remember that `errMsg (String)` from the last task? When that gets thrown, the flow will switch to follow an angry dashed red line (if there is one). After dropping a new scriptable task onto the canvas, I can click on the blue line connecting it to the previous item and then click the red X to make it go away.
![So long, Blue Line!](BOIwhMxKy.png)
@ -552,7 +552,7 @@ System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ")
```

#### Task: remove lock
And we should also remove that lock that we created at the start of this workflow.
![Task: remove lock](BhBnBh8VB.png)

```js
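// a sketch of what this task plausibly does, using vRO's LockingSystem scripting API;
// the lock ID and owner strings below are assumptions, not values from the post
LockingSystem.unlock("vmNameLock", "eventBroker");
System.log("Released naming lock");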
@ -570,14 +570,14 @@ Done! Well, mostly. Right now the workflow only actually releases the lock if it
I can use a default error handler to capture an abort due to running out of possible names, release the lock (with an exact copy of the `remove lock` task), and return (failed) control back to the parent workflow.
![Default error handler](afDacKjVx.png)

Because the error handler will only fire when the workflow has failed catastrophically, I'll want to make sure the parent workflow knows about it. So I'll set the end mode to "Error, throw an exception" and bind it to that `errMsg (String)` variable to communicate the problem back to the parent.
![End Mode](R9d8edeFP.png)

#### Finalizing the VM Provisioning workflow
When I had dropped the foreach workflow item into the VM Provisioning workflow earlier, I hadn't configured anything but the name. Now that the nested workflow is complete, I need to fill in the blanks:
![Generate unique hostname](F0IZHRj-J.png)

So for each item in `originalNames (Array/string)`, this will run the workflow named `Generate unique hostname`. The input to the workflow will be `requestProperties (Properties)`, and the output will be `newNames (Array/string)`.

### Putting it all together now
@ -609,6 +609,6 @@ And, finally, I can go back to vRA and request a new VM and confirm that the nam
It's so beautiful!

### Wrap-up
At this point, I'm tired of typing and I'm sure you're tired of reading. In the next installment, I'll go over how I modify this workflow to also check for naming conflicts in Active Directory and DNS. That sounds like it should be pretty simple but, well, you'll see.

See you then!