content/post/accessing-tce-cluster-from-new-device/index.md
---
title: "Accessing a Tanzu Community Edition Kubernetes Cluster from a new device" # Title of the blog post.
date: 2022-02-01T10:58:57-06:00 # Date of post creation.
# lastmod: 2022-02-01T10:58:57-06:00 # Date when last modified
description: "The Tanzu Community Edition documentation does a great job of explaining how to authenticate to a newly-deployed cluster at the tail end of the installation steps, but how do you log in from another system?" # Description used for search engine.
featured: false # Sets if post is a featured post, making it appear on the home page side bar.
draft: true # Sets whether to render this page. Draft of true will not be rendered.
toc: false # Controls if a table of contents should be generated for first-level links automatically.
usePageBundles: true
# menu: main
# featureImage: "file.png" # Sets featured image on blog post.
# featureImageAlt: 'Description of image' # Alternative text for featured image.
# featureImageCap: 'This is the featured image.' # Caption (optional).
# thumbnail: "thumbnail.png" # Sets thumbnail image appearing inside card on homepage.
# shareImage: "share.png" # Designate a separate image for social media sharing.
codeLineNumbers: false # Override global value for showing of line numbers within code block.
series: Tips
tags:
- vmware
- kubernetes
- tanzu
comment: true # Disable comment if false.
---

When I [recently set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since I knew that my Chromebook Linux environment wouldn't support the `kind` bootstrap cluster used for the deployment. But now I'd like to be able to connect to the cluster directly using the `tanzu` and `kubectl` CLI tools. How do I get the appropriate cluster configuration over to my Chromebook?

The Tanzu CLI actually makes that pretty easy. I just run these commands on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) and workload (`tce-work`) clusters to a pair of files:
```shell
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml
tanzu cluster kubeconfig get tce-work --admin --export-file tce-work-kubeconfig.yaml
```

I could then use `scp` to pull the files from the VM into my local Linux environment. I then needed to [install `kubectl`](/tanzu-community-edition-k8s-homelab/#kubectl-binary) and the [`tanzu` CLI](/tanzu-community-edition-k8s-homelab/#tanzu-cli) (making sure to also [enable shell auto-completion](/enable-tanzu-cli-auto-completion-bash-zsh/) along the way!), and I could import the configurations locally:

```shell
❯ tanzu login --kubeconfig tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
✔  successfully logged in to management cluster using the kubeconfig tce-mgmt

❯ tanzu login --kubeconfig tce-work-kubeconfig.yaml --context tce-work-admin@tce-work --name tce-work
✔  successfully logged in to management cluster using the kubeconfig tce-work
```
---
series: vRA8
date: "2021-06-01T08:34:30Z"
thumbnail: -Fuvz-GmF.png
usePageBundles: true
tags:
- vmware
- vra
- vro
- javascript
title: Adding VM Notes and Custom Attributes with vRA8
---

*In [past posts](/series/vra8), I started by [creating a basic deployment infrastructure](/vra8-custom-provisioning-part-one) in Cloud Assembly and using tags to group those resources. I then [wrote an integration](/integrating-phpipam-with-vrealize-automation-8) to let vRA8 use phpIPAM for static address assignments. I [implemented a vRO workflow](/vra8-custom-provisioning-part-two) for generating unique VM names which fit an organization's established naming standard, and then [extended the workflow](/vra8-custom-provisioning-part-three) to avoid any naming conflicts in Active Directory and DNS. And, finally, I [created an intelligent provisioning request form in Service Broker](/vra8-custom-provisioning-part-four) to make it easy for users to get the servers they need. That's got the core functionality pretty well sorted, so moving forward I'll be detailing additions that enable new capabilities and enhance the experience.*

In this post, I'll describe how to get certain details from the Service Broker request form and into the VM's properties in vCenter. The obvious application of this is adding descriptive notes so I can remember what purpose a VM serves, but I will also be using [Custom Attributes](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-73606C4C-763C-4E27-A1DA-032E4C46219D.html) to store the server's Point of Contact information and a record of which ticketing system request resulted in the server's creation.

### New inputs
I'll start this by adding a few new inputs to the cloud template in Cloud Assembly.
![New inputs in Cloud Assembly](F3Wkd3VT.png)

I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`.

```yaml
inputs:
  [...]
  description:
    type: string
    title: Description
    description: Server function/purpose
    default: Testing and evaluation
  poc_name:
    type: string
    title: Point of Contact Name
    default: Jack Shephard
  poc_email:
    type: string
    title: Point of Contact Email
    default: jack.shephard@virtuallypotato.com
    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
  ticket:
    type: string
    title: Ticket/Request Number
    default: 4815162342
  [...]
```
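
As a quick sanity check outside of vRA, the same pattern can be exercised in Python (the sample addresses below are just illustrative):

```python
import re

# Same pattern as the poc_email input: something@something.something, no whitespace
pattern = re.compile(r'^[^\s@]+@[^\s@]+\.[^\s@]+$')

print(bool(pattern.match('jack.shephard@virtuallypotato.com')))  # True
print(bool(pattern.match('not an email')))                       # False
print(bool(pattern.match('missing@tld')))                        # False
```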

I'll also need to add these to the `resources` section of the template so that they will get passed along with the deployment properties.
![New resource properties](N7YllJkxS.png)

I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string.

```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      <...>
      poc: '${input.poc_name + " (" + input.poc_email + ")"}'
      ticket: '${input.ticket}'
      description: '${input.description}'
      <...>
```

I'll save this as a new version so that the changes will be available in the Service Broker front-end.
![New template version](Z2aKLsLou.png)

### Service Broker custom form
I can then go to Service Broker and drag the new fields onto the Custom Form canvas. (If the new fields don't show up, hit up the Content Sources section of Service Broker, select the content source, and click the "Save and Import" button to sync the changes.) While I'm at it, I set the Description field to display as a text area (encouraging more detailed input), and I also set all the fields on the form to be required.
![Service Broker form](unhgNySSzz.png)

### vRO workflow
Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](/vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning".
![Naming the new workflow](X9JhgWx8x.png)

The workflow will have a single input from vRA, `inputProperties` of type `Properties`.
![Workflow input](zHrp6GPcP.png)

The first thing this workflow needs to do is parse `inputProperties (Properties)` to get the name of the VM, and it will then use that information to query vCenter and grab the corresponding VM object. So I'll add a scriptable task item to the workflow canvas and call it `Get VM Object`. It will take `inputProperties (Properties)` as its sole input, and output a new variable called `vm` of type `VC:VirtualMachine`.
![Get VM Object action](5ATk99aPW.png)

The script for this task is fairly straightforward:
```js
// JavaScript: Get VM Object
// Inputs: inputProperties (Properties)
// Outputs: vm (VC:VirtualMachine)

var name = inputProperties.resourceNames[0]

var vms = VcPlugin.getAllVirtualMachines(null, name)
System.log("Found VM object: " + vms[0])
vm = vms[0]
```

I'll add another scriptable task item to the workflow to actually apply the notes to the VM - I'll call it `Set Notes`, and it will take both `vm (VC:VirtualMachine)` and `inputProperties (Properties)` as its inputs.
![Set Notes action](w24V6YVOR.png)

The first part of the script creates a new VM config spec, inserts the description into the spec, and then reconfigures the selected VM with the new spec.

The second part uses a built-in action to set the `Point of Contact` and `Ticket` custom attributes accordingly.

```js
// JavaScript: Set Notes
// Inputs: vm (VC:VirtualMachine), inputProperties (Properties)
// Outputs: None

var notes = inputProperties.customProperties.description
var poc = inputProperties.customProperties.poc
var ticket = inputProperties.customProperties.ticket

var spec = new VcVirtualMachineConfigSpec()
spec.annotation = notes
vm.reconfigVM_Task(spec)

System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm, "Point of Contact", poc)
System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm, "Ticket", ticket)
```

### Extensibility subscription
Now I need to return to Cloud Assembly and create a new extensibility subscription that will call this new workflow at the appropriate time. I'll call it "VM Post-Provisioning" and attach it to the "Compute Post Provision" topic.
![Creating the new subscription](PmhVOWJsUn.png)

And then I'll link it to my new workflow:
![Selecting the workflow](cEbWSOg00.png)

### Testing
And then back to Service Broker to request a VM and see if it works:

![Test request](Lq9DBCK_Y.png)

It worked!
![New VM with notes](-Fuvz-GmF.png)

In the future, I'll be exploring more features that I can add on to this "VM Post-Provisioning" workflow, like creating static DNS records as needed.
---
series: Scripts
date: "2021-04-29T08:34:30Z"
usePageBundles: true
thumbnail: 20210723-script.png
tags:
- linux
- shell
- automation
title: Automatic unattended expansion of Linux root LVM volume to fill disk
toc: false
---

While working on my [vRealize Automation 8 project](/series/vra8), I wanted to let users specify how large a VM's system drive should be and have vRA apply that without any further user intervention. For instance, if the template has a 60GB C: drive and the user specifies that they want it to be 80GB, vRA will embiggen the new VM's VMDK to 80GB and then expand the guest file system to fill up the new free space.

I'll get into the details of how that's implemented from the vRA side #soon, but first I needed to come up with simple scripts to extend the guest file system to fill the disk.

This was pretty straightforward on Windows with a short PowerShell script to grab the appropriate volume and resize it to its full capacity:
```powershell
$Partition = Get-Volume -DriveLetter C | Get-Partition
$Partition | Resize-Partition -Size ($Partition | Get-PartitionSupportedSize).sizeMax
```

It was a bit trickier for Linux systems though. My Linux templates all use LVM to abstract the file systems away from the physical disks, but they may have a different number of physical partitions or different names for the volume groups and logical volumes. So I needed to be able to automagically determine which logical volume was mounted as `/`, which volume group it was a member of, and which partition on which disk is used for that physical volume. I could then expand the physical partition to fill the disk, expand the volume group to fill the now-larger physical volume, grow the logical volume to fill the volume group, and (finally) extend the file system to fill the logical volume.

I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh) that helped with most of those operations, but it required the user to specify the physical and logical volumes. I modified it to auto-detect those, and here's what I came up with:

{{% notice info "MBR only" %}}
When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu.
{{% /notice %}}

```shell
#!/bin/bash
# This will attempt to automatically detect the LVM logical volume where / is mounted and then
# expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical
# volume, and Linux filesystem to consume new free space on the disk.
# Adapted from https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh

extenddisk() {
  echo -e "\n+++Current partition layout of $disk:+++"
  parted $disk --script unit s print
  if [ $logical == 1 ]; then
    parted $disk --script rm $ext_partnum
    parted $disk --script "mkpart extended ${ext_startsector}s -1s"
    parted $disk --script "set $ext_partnum lba off"
    parted $disk --script "mkpart logical ext2 ${startsector}s -1s"
  else
    parted $disk --script rm $partnum
    parted $disk --script "mkpart primary ext2 ${startsector}s -1s"
  fi
  parted $disk --script set $partnum lvm on
  echo -e "\n\n+++New partition layout of $disk:+++"
  parted $disk --script unit s print
  partx -v -a $disk
  pvresize $pvname
  lvextend --extents +100%FREE --resize $lvpath
  echo -e "\n+++New root partition size:+++"
  df -h / | grep -v Filesystem
}
export LVM_SUPPRESS_FD_WARNINGS=1
mountpoint=$(df --output=source / | grep -v Filesystem) # /dev/mapper/centos-root
lvdisplay $mountpoint > /dev/null
if [ $? != 0 ]; then
  echo "Error: $mountpoint does not look like an LVM logical volume. Aborting."
  exit 1
fi
echo -e "\n+++Current root partition size:+++"
df -h / | grep -v Filesystem
lvname=$(lvs --noheadings $mountpoint | awk '{print($1)}') # root
vgname=$(lvs --noheadings $mountpoint | awk '{print($2)}') # centos
lvpath="/dev/${vgname}/${lvname}" # /dev/centos/root
pvname=$(pvs | grep $vgname | tail -n1 | awk '{print($1)}') # /dev/sda2
disk=$(echo $pvname | rev | cut -c 2- | rev) # /dev/sda
diskshort=$(echo $disk | grep -Po '[^\/]+$') # sda
partnum=$(echo $pvname | grep -Po '\d$') # 2
startsector=$(fdisk -u -l $disk | grep $pvname | awk '{print $2}') # 2099200
layout=$(parted $disk --script unit s print) # Model: VMware Virtual disk (scsi) Disk /dev/sda: 83886080s Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 2048s 2099199s 2097152s primary xfs boot 2 2099200s 62914559s 60815360s primary lvm
if grep -Pq "^\s$partnum\s+.+?logical.+$" <<< "$layout"; then
  logical=1
  ext_partnum=$(parted $disk --script unit s print | grep extended | grep -Po '^\s\d\s' | tr -d ' ')
  ext_startsector=$(parted $disk --script unit s print | grep extended | awk '{print $2}' | tr -d 's')
else
  logical=0
fi
parted $disk --script unit s print | if ! grep -Pq "^\s$partnum\s+.+?[^,]+?lvm\s*$"; then
  echo -e "Error: $pvname seems to have some flags other than 'lvm' set."
  exit 1
fi
if ! (fdisk -u -l $disk | grep $disk | tail -1 | grep $pvname | grep -q "Linux LVM"); then
  echo -e "Error: $pvname is not the last LVM volume on disk $disk."
  exit 1
fi
ls /sys/class/scsi_device/*/device/rescan | while read path; do echo 1 > $path; done
ls /sys/class/scsi_host/host*/scan | while read path; do echo "- - -" > $path; done
extenddisk
```

And it works beautifully within my environment. Hopefully it'll work for yours too, in case you have a similar need!
---
title: "Bulk Import vSphere dvPortGroups to phpIPAM" # Title of the blog post.
date: 2022-02-04 # Date of post creation.
# lastmod: 2022-01-21T15:24:00-06:00 # Date when last modified
description: "I wrote a Python script to interface with the phpIPAM API and import a large number of networks exported from vSphere for IP management." # Description used for search engine.
featured: false # Sets if post is a featured post, making it appear on the home page side bar.
draft: false # Sets whether to render this page. Draft of true will not be rendered.
toc: true # Controls if a table of contents should be generated for first-level links automatically.
usePageBundles: true
# menu: main
# featureImage: "file.png" # Sets featured image on blog post.
# featureImageAlt: 'Description of image' # Alternative text for featured image.
# featureImageCap: 'This is the featured image.' # Caption (optional).
thumbnail: "code.png" # Sets thumbnail image appearing inside card on homepage.
# shareImage: "share.png" # Designate a separate image for social media sharing.
codeLineNumbers: false # Override global value for showing of line numbers within code block.
series: Scripts
tags:
- vmware
- powercli
- python
- api
- phpipam
comment: true # Disable comment if false.
---

I [recently wrote](/tanzu-community-edition-k8s-homelab/#a-real-workload---phpipam) about getting started with VMware's [Tanzu Community Edition](https://tanzucommunityedition.io/) and deploying [phpIPAM](https://phpipam.net/) as my first real-world Kubernetes workload. Well, I've spent much of my time since then working on a script which would help to populate my phpIPAM instance with a list of networks to monitor.

### Planning and Exporting
The first step in making this work was to figure out which networks I wanted to import. We've got hundreds of different networks in use across our production vSphere environments. I focused only on those which are portgroups on distributed virtual switches since those configurations are pretty standardized (being vCenter constructs instead of configured on individual hosts). These dvPortGroups bear a naming standard which conveys all sorts of useful information, and it's easy and safe to rename any dvPortGroups which _don't_ fit the standard (unlike renaming portgroups on a standard virtual switch).

The standard naming convention is `[Site/Description] [Network Address]{/[Mask]}`. So the networks (across two virtual datacenters and two dvSwitches) look something like this:
![Production dvPortGroups approximated in my testing lab environment](dvportgroups.png)

Some networks have masks in the name, some don't; and some use an underscore (`_`) rather than a slash (`/`) to separate the network from the mask. Most networks correctly include the network address with a `0` in the last octet, but some use an `x` instead. And the VLANs associated with the networks have a varying number of digits. Consistency can be difficult, so these are all things that I had to keep in mind as I worked on a solution which would make a true best effort at importing all of these.

As long as the dvPortGroup names stick to this format I can parse the name to come up with a description as well as the IP space of the network. The dvPortGroup also carries information about the associated VLAN, which is useful information to have. And I can easily export this information with a simple PowerCLI query:

```powershell
PS /home/john> get-vdportgroup | select Name, VlanConfiguration

Name                          VlanConfiguration
----                          -----------------
MGT-Home 192.168.1.0
MGT-Servers 172.16.10.0       VLAN 1610
BOW-Servers 172.16.20.0       VLAN 1620
BOW-Servers 172.16.30.0       VLAN 1630
BOW-Servers 172.16.40.0       VLAN 1640
DRE-Servers 172.16.50.0       VLAN 1650
DRE-Servers 172.16.60.x       VLAN 1660
VPOT8-Mgmt 172.20.10.0/27     VLAN 20
VPOT8-Servers 172.20.10.32/27 VLAN 30
VPOT8-Servers 172.20.10.64_26 VLAN 40
```
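
The name-parsing logic described above boils down to: split the site/description from the network portion, normalize `_` to `/`, and swap a trailing `x` octet for `0`. Here's a rough sketch of that logic in Python (a hypothetical helper for illustration, not the final script):

```python
import re

def parse_name(name):
    """Split a dvPortGroup name like 'BOW-Servers 172.16.20.0' or
    'VPOT8-Servers 172.20.10.64_26' into (description, network, mask)."""
    # Everything before the last space is the description; the rest is the network
    description, _, network = name.rpartition(' ')
    network = network.replace('_', '/')        # some names use _ instead of / for the mask
    network, _, mask = network.partition('/')  # the mask may be absent entirely
    network = re.sub(r'x$', '0', network)      # some names use x for the last octet
    return description, network, mask or None

print(parse_name('BOW-Servers 172.16.20.0'))        # ('BOW-Servers', '172.16.20.0', None)
print(parse_name('VPOT8-Servers 172.20.10.64_26'))  # ('VPOT8-Servers', '172.20.10.64', '26')
print(parse_name('DRE-Servers 172.16.60.x'))        # ('DRE-Servers', '172.16.60.0', None)
```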

In my [homelab](/vmware-home-lab-on-intel-nuc-9/), I only have a single vCenter. In production, we've got a handful of vCenters, and each manages the hosts in a given region. So I can use information about which vCenter hosts a dvPortGroup to figure out which region a network is in. When I import this data into phpIPAM, I can use the vCenter name to assign [remote scan agents](https://github.com/jbowdre/phpipam-agent-docker) to networks based on the region that they're in. I can also grab information about which virtual datacenter a dvPortGroup lives in, which I'll use for grouping networks into sites or sections.

The vCenter can be found in the `Uid` property returned by `get-vdportgroup`:
```powershell
PS /home/john> get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid

Name                          VlanConfiguration Datacenter Uid
----                          ----------------- ---------- ---
MGT-Home 192.168.1.0                            Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27015/
MGT-Servers 172.16.10.0       VLAN 1610         Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27017/
BOW-Servers 172.16.20.0       VLAN 1620         Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28010/
BOW-Servers 172.16.30.0       VLAN 1630         Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28011/
BOW-Servers 172.16.40.0       VLAN 1640         Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28012/
DRE-Servers 172.16.50.0       VLAN 1650         Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28013/
DRE-Servers 172.16.60.x       VLAN 1660         Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28014/
VPOT8-Mgmt 172.20.10.0/…      VLAN 20           Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35018/
VPOT8-Servers 172.20.10…      VLAN 30           Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35019/
VPOT8-Servers 172.20.10…      VLAN 40           Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35020/
```

It's not pretty, but it'll do the trick. All that's left is to export this data into a handy-dandy CSV-formatted file that I can easily parse for import:

```powershell
get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid | export-csv -NoTypeInformation ./networks.csv
```
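
That export is plain quoted CSV, so the import script can read it with Python's standard `csv` module. A minimal sketch, using a made-up single-row sample (with a shortened placeholder Uid) standing in for the real file:

```python
import csv
import io

# Hypothetical sample mimicking the export-csv output; the real file has many more rows
sample = (
    '"Name","VlanConfiguration","Datacenter","Uid"\n'
    '"MGT-Servers 172.16.10.0","VLAN 1610","Lab","/VIServer=lab\\john@vcsa.example.com:443/"\n'
)

# DictReader keys each row by the header line, so fields can be accessed by name
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]['Name'])               # MGT-Servers 172.16.10.0
print(rows[0]['VlanConfiguration'])  # VLAN 1610
```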
![My networks.csv export, including the networks which don't match the naming criteria and will be skipped by the import process.](networks.csv.png)

### Setting up phpIPAM
After [deploying a fresh phpIPAM instance on my Tanzu Community Edition Kubernetes cluster](/tanzu-community-edition-k8s-homelab/#a-real-workload---phpipam), there are a few additional steps needed to enable API access. To start, I log in to my phpIPAM instance and navigate to the **Administration > Server Management > phpIPAM Settings** page, where I enable both the *Prettify links* and *API* feature settings, making sure to hit the **Save** button at the bottom of the page once I do so.
![Enabling the API](server_settings.png)

Then I need to head to the **User Management** page to create a new user that will be used to authenticate against the API:
![New user creation](new_user.png)

And finally, I head to the **API** section to create a new API key with Read/Write permissions:
![API key creation](api_user.png)

I'm also going to head into **Administration > IP Related Management > Sections** and delete the default sample sections so that the inventory will be nice and empty:
![We don't need no stinkin' sections!](empty_sections.png)

### Script time
|
||||||
|
Well that's enough prep work; now it's time for the Python3 [script](https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py):
|
||||||
|
|
||||||
|
```python
|
||||||
|
# The latest version of this script can be found on Github:
|
||||||
|
# https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py
|
||||||
|
|
||||||
|
import requests
|
||||||
|
from collections import namedtuple
|
||||||
|
|
||||||
|
check_cert = True
|
||||||
|
created = 0
|
||||||
|
remote_agent = False
|
||||||
|
name_to_id = namedtuple('name_to_id', ['name', 'id'])
|
||||||
|
|
||||||
|
## for testing only:
|
||||||
|
# from requests.packages.urllib3.exceptions import InsecureRequestWarning
|
||||||
|
# requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
|
||||||
|
# check_cert = False
|
||||||
|
|
||||||
|
## Makes sure input fields aren't blank.
|
||||||
|
def validate_input_is_not_empty(field, prompt):
|
||||||
|
while True:
|
||||||
|
user_input = input(f'\n{prompt}:\n')
|
||||||
|
if len(user_input) == 0:
|
||||||
|
print(f'[ERROR] {field} cannot be empty!')
|
||||||
|
continue
|
||||||
|
else:
|
||||||
|
return user_input
|
||||||
|
|
||||||
|
|
||||||
|
## Takes in a list of dictionary items, extracts all the unique values for a given key,
|
||||||
|
# and returns a sorted list of those.
|
||||||
|
def get_sorted_list_of_unique_values(key, list_of_dict):
|
||||||
|
valueSet = set(sub[key] for sub in list_of_dict)
|
||||||
|
valueList = list(valueSet)
|
||||||
|
valueList.sort()
|
||||||
|
return valueList
|
||||||
|
|
||||||
|
|
||||||
|
## Match names and IDs
|
||||||
|
def get_id_from_sets(name, sets):
|
||||||
|
return [item.id for item in sets if name == item.name][0]
|
||||||
|
|
||||||
|
|
||||||
|
## Authenticate to phpIPAM endpoint and return an auth token
|
||||||
|
def auth_session(uri, auth):
|
||||||
|
print(f'Authenticating to {uri}...')
|
||||||
|
try:
|
||||||
|
req = requests.post(f'{uri}/user/', auth=auth, verify=check_cert)
|
||||||
|
except:
|
||||||
|
raise requests.exceptions.RequestException
|
||||||
|
if req.status_code != 200:
|
||||||
|
print(f'[ERROR] Authentication failure: {req.json()}')
|
||||||
|
raise requests.exceptions.RequestException
|
||||||
|
token = {"token": req.json()['data']['token']}
|
||||||
|
print('\n[AUTH_SUCCESS] Authenticated successfully!')
|
||||||
|
return token
|
||||||
|
|
||||||
|
|
||||||
|
## Find or create a remote scan agent for each region (vcenter)
|
||||||
|
def get_agent_sets(uri, token, regions):
|
||||||
|
agent_sets = []
|
||||||
|
|
||||||
|
def create_agent_set(uri, token, name):
|
||||||
|
import secrets
|
||||||
|
# generate a random secret to be used for identifying this agent
|
||||||
|
payload = {
|
||||||
|
'name': name,
|
||||||
|
'type': 'mysql',
|
||||||
|
'code': secrets.base64.urlsafe_b64encode(secrets.token_bytes(24)).decode("utf-8"),
|
||||||
|
'description': f'Remote scan agent for region {name}'
|
||||||
|
}
|
||||||
|
req = requests.post(f'{uri}/tools/scanagents/', data=payload, headers=token, verify=check_cert)
|
||||||
|
id = req.json()['id']
|
||||||
|
agent_set = name_to_id(name, id)
|
||||||
|
print(f'[AGENT_CREATE] {name} created.')
|
||||||
|
return agent_set
|
||||||
|
|
||||||
|
for region in regions:
|
||||||
|
name = regions[region]['name']
|
||||||
|
req = requests.get(f'{uri}/tools/scanagents/?filter_by=name&filter_value={name}', headers=token, verify=check_cert)
|
||||||
|
if req.status_code == 200:
|
||||||
|
id = req.json()['data'][0]['id']
|
||||||
|
agent_set = name_to_id(name, id)
|
||||||
|
else:
|
||||||
|
agent_set = create_agent_set(uri, token, name)
|
||||||
|
agent_sets.append(agent_set)
|
||||||
|
return agent_sets
|
||||||
|
|
||||||
|
|
||||||
|
## Find or create a section for each virtual datacenter
def get_section(uri, token, section, parentSectionId):

    def create_section(uri, token, section, parentSectionId):
        payload = {
            'name': section,
            'masterSection': parentSectionId,
            'permissions': '{"2":"2"}',
            'showVLAN': '1'
        }
        req = requests.post(f'{uri}/sections/', data=payload, headers=token, verify=check_cert)
        id = req.json()['id']
        print(f'[SECTION_CREATE] Section {section} created.')
        return id

    req = requests.get(f'{uri}/sections/{section}/', headers=token, verify=check_cert)
    if req.status_code == 200:
        id = req.json()['data']['id']
    else:
        id = create_section(uri, token, section, parentSectionId)
    return id

## Find or create VLANs
def get_vlan_sets(uri, token, vlans):
    vlan_sets = []

    def create_vlan_set(uri, token, vlan):
        payload = {
            'name': f'VLAN {vlan}',
            'number': vlan
        }
        req = requests.post(f'{uri}/vlan/', data=payload, headers=token, verify=check_cert)
        id = req.json()['id']
        vlan_set = name_to_id(vlan, id)
        print(f'[VLAN_CREATE] VLAN {vlan} created.')
        return vlan_set

    for vlan in vlans:
        if vlan != 0:
            req = requests.get(f'{uri}/vlan/?filter_by=number&filter_value={vlan}', headers=token, verify=check_cert)
            if req.status_code == 200:
                id = req.json()['data'][0]['vlanId']
                vlan_set = name_to_id(vlan, id)
            else:
                vlan_set = create_vlan_set(uri, token, vlan)
            vlan_sets.append(vlan_set)
    return vlan_sets

## Find or create nameserver configurations for each region
def get_nameserver_sets(uri, token, regions):
    nameserver_sets = []

    def create_nameserver_set(uri, token, name, nameservers):
        payload = {
            'name': name,
            'namesrv1': nameservers,
            'description': f'Nameserver created for region {name}'
        }
        req = requests.post(f'{uri}/tools/nameservers/', data=payload, headers=token, verify=check_cert)
        id = req.json()['id']
        nameserver_set = name_to_id(name, id)
        print(f'[NAMESERVER_CREATE] Nameserver {name} created.')
        return nameserver_set

    for region in regions:
        name = regions[region]['name']
        req = requests.get(f'{uri}/tools/nameservers/?filter_by=name&filter_value={name}', headers=token, verify=check_cert)
        if req.status_code == 200:
            id = req.json()['data'][0]['id']
            nameserver_set = name_to_id(name, id)
        else:
            nameserver_set = create_nameserver_set(uri, token, name, regions[region]['nameservers'])
        nameserver_sets.append(nameserver_set)
    return nameserver_sets

## Find or create subnet for each dvPortGroup
def create_subnet(uri, token, network):

    def update_nameserver_permissions(uri, token, network):
        nameserverId = network['nameserverId']
        sectionId = network['sectionId']
        req = requests.get(f'{uri}/tools/nameservers/{nameserverId}/', headers=token, verify=check_cert)
        permissions = req.json()['data']['permissions']
        permissions = str(permissions).split(';')
        if sectionId not in permissions:
            permissions.append(sectionId)
            if 'None' in permissions:
                permissions.remove('None')
            permissions = ';'.join(permissions)
            payload = {
                'permissions': permissions
            }
            req = requests.patch(f'{uri}/tools/nameservers/{nameserverId}/', data=payload, headers=token, verify=check_cert)

    payload = {
        'subnet': network['subnet'],
        'mask': network['mask'],
        'description': network['name'],
        'sectionId': network['sectionId'],
        'scanAgent': network['agentId'],
        'nameserverId': network['nameserverId'],
        'vlanId': network['vlanId'],
        'pingSubnet': '1',
        'discoverSubnet': '1',
        'resolveDNS': '1',
        'DNSrecords': '1'
    }
    req = requests.post(f'{uri}/subnets/', data=payload, headers=token, verify=check_cert)
    if req.status_code == 201:
        network['subnetId'] = req.json()['id']
        update_nameserver_permissions(uri, token, network)
        print(f"[SUBNET_CREATE] Created subnet {req.json()['data']}")
        global created
        created += 1
    elif req.status_code == 409:
        print(f"[SUBNET_EXISTS] Subnet {network['subnet']}/{network['mask']} already exists.")
    else:
        print(f"[ERROR] Problem creating subnet {network['name']}: {req.json()}")

## Import list of networks from the specified CSV file
def import_networks(filepath):
    print(f'Importing networks from {filepath}...')
    import csv
    import re
    ipPattern = re.compile(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.[0-9xX]{1,3}')
    networks = []
    with open(filepath) as csv_file:
        reader = csv.DictReader(csv_file)
        line_count = 0
        for row in reader:
            network = {}
            if line_count > 0:
                if re.search(ipPattern, row['Name']):
                    network['subnet'] = re.findall(ipPattern, row['Name'])[0]
                    if network['subnet'].split('.')[-1].lower() == 'x':
                        network['subnet'] = network['subnet'].lower().replace('x', '0')
                    network['name'] = row['Name']
                    if '/' in row['Name'][-3]:
                        network['mask'] = row['Name'].split('/')[-1]
                    elif '_' in row['Name'][-3]:
                        network['mask'] = row['Name'].split('_')[-1]
                    else:
                        network['mask'] = '24'
                    network['section'] = row['Datacenter']
                    try:
                        network['vlan'] = int(row['VlanConfiguration'].split('VLAN ')[1])
                    except:
                        network['vlan'] = 0
                    network['vcenter'] = f"{(row['Uid'].split('@'))[1].split(':')[0].split('.')[0]}"
                    networks.append(network)
            line_count += 1
    print(f'Processed {line_count} lines and found:')
    return networks

def main():
    import socket
    import getpass
    import argparse
    from pathlib import Path

    parser = argparse.ArgumentParser()
    parser.add_argument("filepath", type=Path)

    # Accept CSV file as an argument to the script or prompt for input if necessary
    try:
        p = parser.parse_args()
        filepath = p.filepath
    except:
        # make sure filepath is a path to an actual file
        print("""\n\n
    This script helps to add vSphere networks to phpIPAM for IP address management. It is expected
    that the vSphere networks are configured as portgroups on distributed virtual switches and
    named like '[Description] [Subnet IP]{/[mask]}' (ex: 'LAB-Servers 192.168.1.0'). The following PowerCLI
    command can be used to export the networks from vSphere:

        Get-VDPortgroup | Select Name, Datacenter, VlanConfiguration, Uid | Export-Csv -NoTypeInformation ./networks.csv

    Subnets added to phpIPAM will be automatically configured for monitoring either using the built-in
    scan agent (default) or a new remote scan agent for each vCenter.
    """)
        while True:
            filepath = Path(validate_input_is_not_empty('Filepath', 'Path to CSV-formatted export from vCenter'))
            if filepath.exists():
                break
            else:
                print(f'[ERROR] Unable to find file at {filepath.name}.')
                continue

    # get collection of networks to import
    networks = import_networks(filepath)
    networkNames = get_sorted_list_of_unique_values('name', networks)
    print(f'\n- {len(networkNames)} networks:\n\t{networkNames}')
    vcenters = get_sorted_list_of_unique_values('vcenter', networks)
    print(f'\n- {len(vcenters)} vCenter servers:\n\t{vcenters}')
    vlans = get_sorted_list_of_unique_values('vlan', networks)
    print(f'\n- {len(vlans)} VLANs:\n\t{vlans}')
    sections = get_sorted_list_of_unique_values('section', networks)
    print(f'\n- {len(sections)} Datacenters:\n\t{sections}')

    regions = {}
    for vcenter in vcenters:
        nameservers = None
        name = validate_input_is_not_empty('Region Name', f'Region name for vCenter {vcenter}')
        for region in regions:
            if name in regions[region]['name']:
                nameservers = regions[region]['nameservers']
        if not nameservers:
            nameservers = validate_input_is_not_empty('Nameserver IPs', f"Comma-separated list of nameserver IPs in {name}")
            nameservers = nameservers.replace(',',';').replace(' ','')
        regions[vcenter] = {'name': name, 'nameservers': nameservers}

    # make sure hostname resolves
    while True:
        hostname = input('\nFully-qualified domain name of the phpIPAM host:\n')
        if len(hostname) == 0:
            print('[ERROR] Hostname cannot be empty.')
            continue
        try:
            test = socket.gethostbyname(hostname)
        except:
            print(f'[ERROR] Unable to resolve {hostname}.')
            continue
        else:
            del test
            break

    username = validate_input_is_not_empty('Username', f'Username with read/write access to {hostname}')
    password = getpass.getpass(f'Password for {username}:\n')
    apiAppId = validate_input_is_not_empty('App ID', f'App ID for API key (from https://{hostname}/administration/api/)')

    agent = input('\nUse per-region remote scan agents instead of a single local scanner? (y/N):\n')
    try:
        if agent.lower()[0] == 'y':
            global remote_agent
            remote_agent = True
    except:
        pass

    proceed = input(f'\n\nProceed with importing {len(networkNames)} networks to {hostname}? (y/N):\n')
    try:
        if proceed.lower()[0] == 'y':
            pass
        else:
            import sys
            sys.exit("Operation aborted.")
    except:
        import sys
        sys.exit("Operation aborted.")
    del proceed

    # assemble variables
    uri = f'https://{hostname}/api/{apiAppId}'
    auth = (username, password)

    # auth to phpIPAM
    token = auth_session(uri, auth)

    # create nameserver entries
    nameserver_sets = get_nameserver_sets(uri, token, regions)
    vlan_sets = get_vlan_sets(uri, token, vlans)
    if remote_agent:
        agent_sets = get_agent_sets(uri, token, regions)

    # create the networks
    for network in networks:
        network['region'] = regions[network['vcenter']]['name']
        network['regionId'] = get_section(uri, token, network['region'], None)
        network['nameserverId'] = get_id_from_sets(network['region'], nameserver_sets)
        network['sectionId'] = get_section(uri, token, network['section'], network['regionId'])
        if network['vlan'] == 0:
            network['vlanId'] = None
        else:
            network['vlanId'] = get_id_from_sets(network['vlan'], vlan_sets)
        if remote_agent:
            network['agentId'] = get_id_from_sets(network['region'], agent_sets)
        else:
            network['agentId'] = '1'
        create_subnet(uri, token, network)

    print(f'\n[FINISH] Created {created} of {len(networks)} networks.')


if __name__ == "__main__":
    main()
```

I'll run it and provide the path to the network export CSV file:
```bash
python3 phpipam-bulk-import.py ~/networks.csv
```

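Two helper functions appear throughout the script — `name_to_id` and `get_id_from_sets` — but are defined earlier in the file, outside the excerpt above. For reference, here's a minimal sketch of what they evidently do (my reconstruction for illustration, not the author's exact code): pair a name with its phpIPAM ID, and look the ID back up later.

```python
# Hypothetical reconstruction of two helpers defined earlier in the script
def name_to_id(name, id):
    """Bundle a name together with its phpIPAM object ID."""
    return {'name': name, 'id': id}

def get_id_from_sets(name, sets):
    """Return the ID paired with the given name in a list of name/ID pairs."""
    for item in sets:
        if item['name'] == name:
            return item['id']

sets = [name_to_id('Lab', 1), name_to_id('Other Lab', 2)]
print(get_id_from_sets('Other Lab', sets))  # 2
```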
The script prints a little descriptive bit about the sort of networks it's going to try to import, and then straight away starts processing the file to identify the networks, vCenters, VLANs, and datacenters which will be imported:

```
Importing networks from /home/john/networks.csv...
Processed 17 lines and found:

- 10 networks:
	['BOW-Servers 172.16.20.0', 'BOW-Servers 172.16.30.0', 'BOW-Servers 172.16.40.0', 'DRE-Servers 172.16.50.0', 'DRE-Servers 172.16.60.x', 'MGT-Home 192.168.1.0', 'MGT-Servers 172.16.10.0', 'VPOT8-Mgmt 172.20.10.0/27', 'VPOT8-Servers 172.20.10.32/27', 'VPOT8-Servers 172.20.10.64_26']

- 1 vCenter servers:
	['vcsa']

- 10 VLANs:
	[0, 20, 30, 40, 1610, 1620, 1630, 1640, 1650, 1660]

- 2 Datacenters:
	['Lab', 'Other Lab']
```
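To make that name parsing concrete, here's the same subnet/mask extraction logic from `import_networks`, pulled out as a standalone function (I've used a three-character slice `[-3:]` where the script indexes a single character; the behavior is identical for the two-digit masks shown above):

```python
import re

# Matches the subnet portion of a portgroup name, e.g. 'DRE-Servers 172.16.60.x'
ip_pattern = re.compile(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.[0-9xX]{1,3}')

def parse_network_name(name):
    """Extract (subnet, mask) from a portgroup name; assumes /24 when no mask is given."""
    subnet = re.findall(ip_pattern, name)[0]
    # an 'x' in the last octet stands in for the network address
    if subnet.split('.')[-1].lower() == 'x':
        subnet = subnet.lower().replace('x', '0')
    if '/' in name[-3:]:
        mask = name.split('/')[-1]
    elif '_' in name[-3:]:
        mask = name.split('_')[-1]
    else:
        mask = '24'
    return subnet, mask

print(parse_network_name('DRE-Servers 172.16.60.x'))    # ('172.16.60.0', '24')
print(parse_network_name('VPOT8-Mgmt 172.20.10.0/27'))  # ('172.20.10.0', '27')
```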

It then starts prompting for the additional details which will be needed:

```
Region name for vCenter vcsa:
Labby

Comma-separated list of nameserver IPs in Lab vCenter:
192.168.1.5

Fully-qualified domain name of the phpIPAM host:
ipam-k8s.lab.bowdre.net

Username with read/write access to ipam-k8s.lab.bowdre.net:
api-user
Password for api-user:


App ID for API key (from https://ipam-k8s.lab.bowdre.net/administration/api/):
api-user

Use per-region remote scan agents instead of a single local scanner? (y/N):
y
```

Up to this point, the script has only been processing data locally, getting things ready for talking to the phpIPAM API. But now it prompts to confirm that we actually want to do the thing (yes please) and then gets to work:

```
Proceed with importing 10 networks to ipam-k8s.lab.bowdre.net? (y/N):
y
Authenticating to https://ipam-k8s.lab.bowdre.net/api/api-user...

[AUTH_SUCCESS] Authenticated successfully!
[VLAN_CREATE] VLAN 20 created.
[VLAN_CREATE] VLAN 30 created.
[VLAN_CREATE] VLAN 40 created.
[VLAN_CREATE] VLAN 1610 created.
[VLAN_CREATE] VLAN 1620 created.
[VLAN_CREATE] VLAN 1630 created.
[VLAN_CREATE] VLAN 1640 created.
[VLAN_CREATE] VLAN 1650 created.
[VLAN_CREATE] VLAN 1660 created.
[SECTION_CREATE] Section Labby created.
[SECTION_CREATE] Section Lab created.
[SUBNET_CREATE] Created subnet 192.168.1.0/24
[SUBNET_CREATE] Created subnet 172.16.10.0/24
[SUBNET_CREATE] Created subnet 172.16.20.0/24
[SUBNET_CREATE] Created subnet 172.16.30.0/24
[SUBNET_CREATE] Created subnet 172.16.40.0/24
[SUBNET_CREATE] Created subnet 172.16.50.0/24
[SUBNET_CREATE] Created subnet 172.16.60.0/24
[SECTION_CREATE] Section Other Lab created.
[SUBNET_CREATE] Created subnet 172.20.10.0/27
[SUBNET_CREATE] Created subnet 172.20.10.32/27
[SUBNET_CREATE] Created subnet 172.20.10.64/26

[FINISH] Created 10 of 10 networks.
```

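One thing that output doesn't show: whenever a subnet is created, the script also grants the subnet's section access to its region's nameserver by appending the section ID to phpIPAM's semicolon-delimited permissions string (the `update_nameserver_permissions` logic from earlier). That merge is easy to get wrong, so here it is as a simplified standalone sketch:

```python
def merge_permissions(existing, section_id):
    """Append a section ID to a semicolon-delimited permissions string, dropping 'None'."""
    perms = str(existing).split(';')   # str() handles the unset (None) case
    if section_id not in perms:
        perms.append(section_id)
    if 'None' in perms:
        perms.remove('None')
    return ';'.join(perms)

print(merge_permissions(None, '7'))   # 7
print(merge_permissions('3;5', '7'))  # 3;5;7
print(merge_permissions('3;7', '7'))  # 3;7
```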
Success! Now I can log in to my phpIPAM instance and check out my newly-imported subnets:
![New subnets!](created_subnets.png)

Even the one with the weird name formatting was parsed and imported correctly:
![Subnet details](subnet_detail.png)

So now phpIPAM knows about the vSphere networks I care about, and it can keep track of which VLANs and nameservers go with which networks. Great! But it still isn't scanning or monitoring those networks, even though I told the script that I wanted to use a remote scan agent. I can check the **Administration > Server management > Scan agents** section of the phpIPAM interface to see my newly-created agent configuration...
![New agent config](agent_config.png)

... but I haven't actually *deployed* an agent yet. I'll do that by following the same basic steps [described here](/tanzu-community-edition-k8s-homelab/#phpipam-agent) to spin up my `phpipam-agent` on Kubernetes, and I'll plug in that automagically-generated code for the `IPAM_AGENT_KEY` environment variable:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpipam-agent
spec:
  selector:
    matchLabels:
      app: phpipam-agent
  replicas: 1
  template:
    metadata:
      labels:
        app: phpipam-agent
    spec:
      containers:
      - name: phpipam-agent
        image: ghcr.io/jbowdre/phpipam-agent:latest
        env:
        - name: IPAM_DATABASE_HOST
          value: "ipam-k8s.lab.bowdre.net"
        - name: IPAM_DATABASE_NAME
          value: "phpipam"
        - name: IPAM_DATABASE_USER
          value: "phpipam"
        - name: IPAM_DATABASE_PASS
          value: "VMware1!"
        - name: IPAM_DATABASE_PORT
          value: "3306"
        - name: IPAM_AGENT_KEY
          value: "CxtRbR81r1ojVL2epG90JaShxIUBl0bT"
        - name: IPAM_SCAN_INTERVAL
          value: "15m"
        - name: IPAM_RESET_AUTODISCOVER
          value: "false"
        - name: IPAM_REMOVE_DHCP
          value: "false"
        - name: TZ
          value: "UTC"
```

I kick it off with a `kubectl apply` command and check back a few minutes later (after the 15-minute interval defined in the above YAML) to confirm that it worked: the remote agent scanned as it was supposed to and is reporting IP status back to the phpIPAM database server:
![Newly-discovered IPs](discovered_ips.png)

I still have some tweaks to make in this environment (why isn't phpIPAM resolving hostnames even though the correct DNS servers are configured?), but this at least demonstrates a successful proof-of-concept import thanks to my Python script. Sure, I only imported 10 networks here, but I now feel ready to process the several hundred networks available in our production environment.

And who knows, maybe this script will come in handy for someone else. Until next time!
---
series: vRA8
date: "2021-08-13T00:00:00Z"
lastmod: "2022-01-18"
usePageBundles: true
thumbnail: 20210813_workflow_success.png
tags:
- vmware
- vra
- vro
- javascript
- powershell
- automation
title: Creating static records in Microsoft DNS from vRealize Automation
---
One of the requirements for my vRA deployments is the ability to automatically create static `A` records for non-domain-joined systems so that users can connect without needing to know the IP address. The organization uses Microsoft DNS servers to provide resolution on the internal domain. At first glance, this shouldn't be too much of a problem: vRealize Orchestrator 8.x can run PowerShell scripts, and PowerShell can use the [`Add-DnsServerResourceRecord` cmdlet](https://docs.microsoft.com/en-us/powershell/module/dnsserver/add-dnsserverresourcerecord?view=windowsserver2019-ps) to create the needed records.

Not so fast, though. That cmdlet is provided through the [Remote Server Administration Tools](https://docs.microsoft.com/en-us/troubleshoot/windows-server/system-management-components/remote-server-administration-tools) package, so it won't be available within the limited PowerShell environment inside of vRO. A workaround might be to add a Windows machine to vRO as a remote PowerShell host, but then you run into [issues of credential hopping](https://communities.vmware.com/t5/vRealize-Orchestrator/unable-to-run-get-DnsServerResourceRecord-via-vRO-Powershell-wf/m-p/2286685).

I eventually came across [this blog post](https://www.virtualnebula.com/blog/2017/7/14/microsoft-ad-dns-integration-over-ssh) which described adding a Windows machine as a remote *SSH* host instead. I'll deviate a bit from the described configuration, but that post at least got me pointed in the right direction. This approach gets around the complicated authentication-tunneling business while still being pretty easy to set up. So let's go!

### Preparing the SSH host
I deployed a Windows Server 2019 Core VM to use as my SSH host and joined it to my AD domain as `win02.lab.bowdre.net`. Once that's taken care of, I need to install the RSAT DNS tools so that I can use the `Add-DnsServerResourceRecord` and associated cmdlets. I can do that through PowerShell like so:
```powershell
# Install RSAT DNS tools
Add-WindowsCapability -online -name Rsat.Dns.Tools~~~~0.0.1.0
```

Instead of using a third-party SSH server, I'll use the OpenSSH Server that's already available in Windows 10 (1809+) and Server 2019:
```powershell
# Install OpenSSH Server
Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0
```

I'll also set PowerShell as the default shell upon SSH login (rather than the standard Command Prompt) so that I have easy access to those DNS cmdlets:
```powershell
# Set PowerShell as the default Shell (for access to DNS cmdlets)
New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force
```

I'll be using my `lab\vra` service account for managing DNS. I've already given it the appropriate rights on the DNS server, but I'll also add it to the Administrators group on my SSH host:
```powershell
# Add the service account as a local administrator
Add-LocalGroupMember -Group Administrators -Member "lab\vra"
```

And I'll modify the OpenSSH configuration so that only members of that Administrators group are permitted to log into the server via SSH:
```powershell
# Restrict SSH access to members of the local Administrators group
(Get-Content "C:\ProgramData\ssh\sshd_config") -Replace "# Authentication:", "$&`nAllowGroups Administrators" | Set-Content "C:\ProgramData\ssh\sshd_config"
```

Finally, I'll start the `sshd` service and set it to start up automatically:
```powershell
# Start the service and set it to start automatically
Set-Service -Name sshd -StartupType Automatic -Status Running
```

#### A quick test
At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone:
```powershell
$ ssh vra@win02.lab.bowdre.net
vra@win02.lab.bowdre.net's password:

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\vra> Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net -Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99

PS C:\Users\vra> nslookup testy
Server:  win01.lab.bowdre.net
Address:  192.168.1.5

Name:    testy.lab.bowdre.net
Address:  172.16.99.99

PS C:\Users\vra> Remove-DnsServerResourceRecord -ComputerName win01.lab.bowdre.net -Name testy -ZoneName lab.bowdre.net -RRType A -Force

PS C:\Users\vra> nslookup testy
Server:  win01.lab.bowdre.net
Address:  192.168.1.5

*** win01.lab.bowdre.net can't find testy: Non-existent domain
```

Cool! Now I just need to do that same thing, but from vRealize Orchestrator. First, though, I'll update the template so the requester can choose whether or not a static record will get created.

### Template changes
#### Cloud Template
Similar to the template changes I made for [optionally joining deployed servers to the Active Directory domain](/joining-vms-to-active-directory-in-site-specific-ous-with-vra8#cloud-template), I'll just be adding a simple boolean checkbox to the `inputs` section of the template in Cloud Assembly:
```yaml
formatVersion: 1
inputs:
  [...]
  staticDns:
    title: Create static DNS record
    type: boolean
    default: false
  [...]
```

*Unlike* the AD piece, in the `resources` section I'll just bind a custom property called `staticDns` to the input with the same name:
```yaml
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      [...]
      staticDns: '${input.staticDns}'
      [...]
```

So here's the complete cloud template that I've been working on:
```yaml
formatVersion: 1
inputs:
  site:
    type: string
    title: Site
    enum:
      - BOW
      - DRE
  image:
    type: string
    title: Operating System
    oneOf:
      - title: Windows Server 2019
        const: ws2019
    default: ws2019
  size:
    title: Resource Size
    type: string
    oneOf:
      - title: 'Micro [1vCPU|1GB]'
        const: micro
      - title: 'Tiny [1vCPU|2GB]'
        const: tiny
      - title: 'Small [2vCPU|2GB]'
        const: small
    default: small
  network:
    title: Network
    type: string
  adJoin:
    title: Join to AD domain
    type: boolean
    default: true
  staticDns:
    title: Create static DNS record
    type: boolean
    default: false
  environment:
    type: string
    title: Environment
    oneOf:
      - title: Development
        const: D
      - title: Testing
        const: T
      - title: Production
        const: P
    default: D
  function:
    type: string
    title: Function Code
    oneOf:
      - title: Application (APP)
        const: APP
      - title: Desktop (DSK)
        const: DSK
      - title: Network (NET)
        const: NET
      - title: Service (SVS)
        const: SVS
      - title: Testing (TST)
        const: TST
    default: TST
  app:
    type: string
    title: Application Code
    minLength: 3
    maxLength: 3
    default: xxx
  description:
    type: string
    title: Description
    description: Server function/purpose
    default: Testing and evaluation
  poc_name:
    type: string
    title: Point of Contact Name
    default: Jack Shephard
  poc_email:
    type: string
    title: Point of Contact Email
    default: jack.shephard@virtuallypotato.com
    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
  ticket:
    type: string
    title: Ticket/Request Number
    default: 4815162342
resources:
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: '${input.image}'
      flavor: '${input.size}'
      site: '${input.site}'
      environment: '${input.environment}'
      function: '${input.function}'
      app: '${input.app}'
      ignoreActiveDirectory: '${!input.adJoin}'
      activeDirectory:
        relativeDN: '${"OU=Servers,OU=Computers,OU=" + input.site + ",OU=LAB"}'
      customizationSpec: '${input.adJoin ? "vra-win-domain" : "vra-win-workgroup"}'
      staticDns: '${input.staticDns}'
      dnsDomain: lab.bowdre.net
      poc: '${input.poc_name + " (" + input.poc_email + ")"}'
      ticket: '${input.ticket}'
      description: '${input.description}'
      networks:
        - network: '${resource.Cloud_vSphere_Network_1.id}'
          assignment: static
      constraints:
        - tag: 'comp:${to_lower(input.site)}'
  Cloud_vSphere_Network_1:
    type: Cloud.vSphere.Network
    properties:
      networkType: existing
      constraints:
        - tag: 'net:${input.network}'
```
I save the template, and then also hit the "Version" button to publish a new version to the catalog:
![Releasing new version](20210803_new_template_version.png)

#### Service Broker Custom Form
I switch over to the Service Broker UI to update the custom form - but first I stop off at **Content & Policies > Content Sources**, select my Content Source, and hit the **Save & Import** button to force a sync of the cloud templates. I can then move on to the **Content & Policies > Content** section, click the 3-dot menu next to my template name, and select the option to **Customize Form**.

I drag the new schema element called `Create static DNS record` from the Request Inputs panel onto the form canvas and drop it right below the `Join to AD domain` field:
![Adding the field to the form](20210803_updating_custom_form.png)

And then I hit the **Save** button so that my efforts are preserved.

That should take care of the front-end changes. Now for the back-end stuff: I need to teach vRO how to connect to my SSH host and run the PowerShell commands, [just like I tested earlier](#a-quick-test).

### The vRO solution
|
||||||
|
I will be adding the DNS action on to my existing "VM Post-Provisioning" workflow (described [here](/adding-vm-notes-and-custom-attributes-with-vra8), which gets triggered after the VM has been successfully deployed.

#### Configuration Element
But first, I'm going to go to the **Assets > Configurations** section of the Orchestrator UI and create a new Configuration Element to store variables related to the SSH host and DNS configuration.

![Create a new configuration](Go3D-gemP.png)

I'll call it `dnsConfig` and put it in my `CustomProvisioning` folder.

![Giving it a name](fJswso9KH.png)

And then I create the following variables:

| Variable | Value | Type |
| --- | --- | --- |
| `sshHost` | `win02.lab.bowdre.net` | string |
| `sshUser` | `vra` | string |
| `sshPass` | `*****` | secureString |
| `dnsServers` | `[win01.lab.bowdre.net]` | Array/string |
| `supportedDomains` | `[lab.bowdre.net]` | Array/string |

`sshHost` is my new `win02` server that I'm going to connect to via SSH, and `sshUser` and `sshPass` should explain themselves. The `dnsServers` array will tell the script which DNS servers to try to create the record on; this will just be a single server in my lab, but I'm going to construct the script to support multiple servers in case one isn't reachable. And `supportedDomains` will be used to restrict where I'll be creating records; again, that's just a single domain in my lab, but I'm building this solution to account for the possibility that a VM might need to be deployed on a domain where I can't create a static record in this way, so I want it to fail elegantly.
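To spell out that "fail elegantly" behavior: the DNS work should only happen when the requester asked for a record *and* the VM's domain appears in `supportedDomains`. Here's a minimal standalone sketch of that gate in plain JavaScript (the function name is mine, purely for illustration; in vRO this check will live inline in the scriptable task):

```javascript
// Hypothetical standalone mirror of the gating check the workflow will perform:
// proceed only when a static record was requested AND the VM's domain is one
// of the supportedDomains entries.
function shouldManageRecord(staticDns, dnsDomain, supportedDomains) {
  // staticDns arrives as the string "true"/"false" via the VM's customProperties
  return staticDns === "true" && supportedDomains.indexOf(dnsDomain) >= 0;
}

console.log(shouldManageRecord("true", "lab.bowdre.net", ["lab.bowdre.net"]));    // true
console.log(shouldManageRecord("true", "other.example.com", ["lab.bowdre.net"])); // false
console.log(shouldManageRecord("false", "lab.bowdre.net", ["lab.bowdre.net"]));   // false
```

Anything that falls through the gate just gets logged and skipped, rather than blowing up the deployment.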

Here's what the new configuration element looks like:

![Variables defined](a5gtUrQbc.png)

#### Workflow to create records
I'll need to tell my workflow about the variables held in the `dnsConfig` Configuration Element I just created. I do that by opening the "VM Post-Provisioning" workflow in the vRO UI, clicking the **Edit** button, and then switching to the **Variables** tab. I create a variable for each member of `dnsConfig`, and enable the toggle to *Bind to configuration* so that I can select the corresponding item. It's important to make sure that the variable type exactly matches what's in the configuration element so that you'll be able to pick it!

![Linking variable to config element](20210809_creating_bound_variable.png)

I repeat that for each of the remaining variables until all the members of `dnsConfig` are represented in the workflow:

![Variables added](20210809_variables_added.png)

Now we're ready for the good part: inserting a new scriptable task into the workflow schema. I'll call it `Create DNS Record` and place it directly after the `Set Notes` task. For inputs, the task will take in `inputProperties (Properties)` as well as everything from that `dnsConfig` configuration element:

![Task inputs](20210809_task_inputs.png)

And here's the JavaScript for the task:
```js
// JavaScript: Create DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string)
// Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
var hostname = inputProperties.resourceNames[0];
var dnsDomain = inputProperties.customProperties.dnsDomain;
var ipAddress = inputProperties.addresses[0];
var created = false;

// check if user requested a record to be created and if the VM's dnsDomain is in the supportedDomains array
if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
  System.log("Attempting to create DNS record for " + hostname + "." + dnsDomain + " at " + ipAddress + "...");
  // create the ssh session to the intermediary host
  var sshSession = new SSHSession(sshHost, sshUser);
  System.debug("Connecting to " + sshHost + "...");
  sshSession.connectWithPassword(sshPass);
  // loop through DNS servers in case the first one doesn't respond
  for each (var dnsServer in dnsServers) {
    if (created == false) {
      System.debug("Using DNS Server " + dnsServer + "...");
      // insert the PowerShell command to create A record
      var sshCommand = 'Add-DnsServerResourceRecordA -ComputerName ' + dnsServer + ' -ZoneName ' + dnsDomain + ' -Name ' + hostname + ' -AllowUpdateAny -IPv4Address ' + ipAddress;
      System.debug("sshCommand: " + sshCommand);
      // run the command and check the result
      sshSession.executeCommand(sshCommand, true);
      var result = sshSession.exitCode;
      if (result == 0) {
        System.log("Successfully created DNS record!");
        // make a note that it was successful so we don't repeat this unnecessarily
        created = true;
      }
    }
  }
  sshSession.disconnect();
  if (created == false) {
    System.warn("Error! Unable to create DNS record.");
  }
} else {
  System.log("Not trying to do DNS");
}
```
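Since `System.*` logging and the `SSHSession` object only exist inside vRO, it can be handy to sanity-check the command string separately before pasting the task in. This sketch (plain JavaScript, with a helper function name I made up just for illustration) mirrors how `sshCommand` gets concatenated above:

```javascript
// Hypothetical standalone mirror of the sshCommand construction in the task;
// useful for eyeballing the exact PowerShell line that will be sent over SSH.
function buildAddRecordCommand(dnsServer, dnsDomain, hostname, ipAddress) {
  return 'Add-DnsServerResourceRecordA -ComputerName ' + dnsServer +
    ' -ZoneName ' + dnsDomain +
    ' -Name ' + hostname +
    ' -AllowUpdateAny -IPv4Address ' + ipAddress;
}

console.log(buildAddRecordCommand('win01.lab.bowdre.net', 'lab.bowdre.net', 'bow-ttst-xxx023', '172.16.30.10'));
// → Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net -ZoneName lab.bowdre.net -Name bow-ttst-xxx023 -AllowUpdateAny -IPv4Address 172.16.30.10
```

If the string looks right here, the only remaining variables are the SSH connectivity and the permissions of the `vra` service account on the DNS server.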

Now I can just save the workflow, and I'm done! - with this part. Of course, being able to *create* a static record is just one half of the fight; I also need to make sure that vRA will be able to clean up these static records when a deployment gets deleted.

#### Workflow to delete records
I haven't previously created any workflows that fire on deployment removal, so I'll create a new one and call it `VM Deprovisioning`:

![New workflow](20210811_new_workflow.png)

This workflow only needs a single input (`inputProperties (Properties)`) so it can receive information about the deployment from vRA:

![Workflow input](20210811_inputproperties.png)

I'll also need to bind in the variables from the `dnsConfig` element as before:

![Workflow variables](20210812_deprovision_variables.png)

The schema will include a single scriptable task:

![Delete DNS Record task](20210812_delete_dns_record_task.png)

And it's going to be *pretty damn similar* to the other one:

```js
// JavaScript: Delete DNS Record task
// Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string)
// Outputs: None

var staticDns = inputProperties.customProperties.staticDns;
var hostname = inputProperties.resourceNames[0];
var dnsDomain = inputProperties.customProperties.dnsDomain;
var ipAddress = inputProperties.addresses[0];
var deleted = false;

// check if user requested a record to be created and if the VM's dnsDomain is in the supportedDomains array
if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
  System.log("Attempting to remove DNS record for " + hostname + "." + dnsDomain + " at " + ipAddress + "...");
  // create the ssh session to the intermediary host
  var sshSession = new SSHSession(sshHost, sshUser);
  System.debug("Connecting to " + sshHost + "...");
  sshSession.connectWithPassword(sshPass);
  // loop through DNS servers in case the first one doesn't respond
  for each (var dnsServer in dnsServers) {
    if (deleted == false) {
      System.debug("Using DNS Server " + dnsServer + "...");
      // insert the PowerShell command to delete A record
      var sshCommand = 'Remove-DnsServerResourceRecord -ComputerName ' + dnsServer + ' -ZoneName ' + dnsDomain + ' -RRType A -Name ' + hostname + ' -Force';
      System.debug("sshCommand: " + sshCommand);
      // run the command and check the result
      sshSession.executeCommand(sshCommand, true);
      var result = sshSession.exitCode;
      if (result == 0) {
        System.log("Successfully deleted DNS record!");
        // make a note that it was successful so we don't repeat this unnecessarily
        deleted = true;
      }
    }
  }
  sshSession.disconnect();
  if (deleted == false) {
    System.warn("Error! Unable to delete DNS record.");
  }
} else {
  System.log("No need to clean up DNS.");
}
```

Since this is a new workflow, I'll also need to head back to **Cloud Assembly > Extensibility > Subscriptions** and add a new subscription to call it when a deployment gets deleted. I'll call it "VM Deprovisioning", assign it to the "Compute Post Removal" Event Topic, and link it to my new "VM Deprovisioning" workflow. I *could* use the Condition option to filter this only for deployments which had a static DNS record created, but I'll later want to use this same workflow for other cleanup tasks so I'll just save it as-is for now.

![VM Deprovisioning subscription](20210812_deprovisioning_subscription.png)

### Testing
Now I can (finally) fire off a quick deployment to see if all this mess actually works:

![Test deploy request](20210812_test_deploy_request.png)

Once the deployment completes, I go back into vRO, find the most recent item in the **Workflow Runs** view, and click over to the **Logs** tab to see how I did:

![Workflow success!](20210813_workflow_success.png)

And I can run a quick query to make sure that name actually resolves:

```shell
❯ dig +short bow-ttst-xxx023.lab.bowdre.net A
172.16.30.10
```

It works!

Now to test the cleanup. For that, I'll head back to Service Broker, navigate to the **Deployments** tab, find my deployment, click the little three-dot menu button, and select the **Delete** option:

![Deleting the deployment](20210813_delete_deployment.png)

Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning task completed successfully:

![VM Deprovisioning workflow](20210813_workflow_deletion.png)

And I can `dig` a little more to make sure the name doesn't resolve anymore:

```shell
❯ dig +short bow-ttst-xxx023.lab.bowdre.net A

```

It *really* works!

### Conclusion
So there you have it - how I've got vRA/vRO able to create and delete static DNS records as needed, using a Windows SSH host as an intermediary. Cool, right?
---
date: "2020-09-22T08:34:30Z"
thumbnail: 8p-PSHx1R.png
usePageBundles: true
tags:
- docker
- windows
- wsl
- containers
title: Docker on Windows 10 with WSL2
---

Microsoft's Windows Subsystem for Linux (WSL) 2 [was recently updated](https://devblogs.microsoft.com/commandline/wsl-2-support-is-coming-to-windows-10-versions-1903-and-1909/) to bring support for less-bleeding-edge Windows 10 versions (like 1903 and 1909). WSL2 is a big improvement over the first iteration (particularly with [better Docker support](https://www.docker.com/blog/docker-desktop-wsl-2-backport-update/)) so I was really looking forward to getting WSL2 loaded up on my work laptop.

Here's how.

### WSL2

#### Step Zero: Prereqs
You'll need Windows 10 1903 build 18362 or newer (on x64). You can check by running `ver` from a Command Prompt:
```powershell
C:\> ver

Microsoft Windows [Version 10.0.18363.1082]
```
We're interested in that third set of numbers. 18363 is bigger than 18362 so we're good to go!

#### Step One: Enable the WSL feature
*(Not needed if you've already been using WSL1.)*

You can do this by dropping the following into an elevated PowerShell prompt:
```powershell
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
```

#### Step Two: Enable the Virtual Machine Platform feature
Drop this in an elevated PowerShell:
```powershell
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
```
And then reboot (this is still Windows, after all).

#### Step Three: Install the WSL2 kernel update package
Download it from [here](https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi), and double-click the downloaded file to install it.

#### Step Four: Set WSL2 as your default
Open a PowerShell window and run:
```powershell
wsl --set-default-version 2
```

#### Step Five: Install a Linux distro, or upgrade an existing one
If you're brand new to this WSL thing, head over to the [Microsoft Store](https://aka.ms/wslstore) and download your favorite Linux distribution. Once it's installed, launch it and you'll be prompted to set up a Linux username and password.

If you've already got a WSL1 distro installed, first run `wsl -l -v` in PowerShell to make sure you know the distro name:
```powershell
PS C:\Users\jbowdre> wsl -l -v
  NAME      STATE           VERSION
* Debian    Running         2
```
And then upgrade the distro to WSL2 with `wsl --set-version <distro_name> 2`:
```powershell
PS C:\Users\jbowdre> wsl --set-version Debian 2
Conversion in progress, this may take a few minutes...
```
Cool!

### Docker
#### Step One: Download
Download Docker Desktop for Windows from [here](https://hub.docker.com/editions/community/docker-ce-desktop-windows/), making sure to grab the "Edge" version since it includes support for the backported WSL2 bits.

#### Step Two: Install
Run the installer, and make sure to tick the box for installing the WSL2 engine.

#### Step Three: Configure Docker Desktop
Launch Docker Desktop from the Start menu, and you should be presented with this friendly prompt:

![Great news! We're supported.](lY2FTflbK.png)

Hit that big friendly "gimme WSL2" button. Then open the Docker Settings from the system tray, and make sure that **General > Use the WSL 2 based engine** is enabled. Now navigate to **Resources > WSL Integration**, and confirm that **Enable integration with my default WSL distro** is enabled as well. Smash the "Apply & Restart" button if you've made any changes.

### Test it!
Fire up a WSL session and confirm that everything is working with `docker run hello-world`:

![Hello, world!](8p-PSHx1R.png)

It's beautiful!
---
title: "Enable Tanzu CLI Auto-Completion in bash and zsh" # Title of the blog post.
date: 2022-02-01T08:34:47-06:00 # Date of post creation.
# lastmod: 2022-02-01T08:34:47-06:00 # Date when last modified
description: "How to configure your Linux shell to help you do the Tanzu" # Description used for search engine.
featured: false # Sets if post is a featured post, making appear on the home page side bar.
draft: false # Sets whether to render this page. Draft of true will not be rendered.
toc: false # Controls if a table of contents should be generated for first-level links automatically.
usePageBundles: true
# menu: main
# featureImage: "tanzu-completion.png" # Sets featured image on blog post.
# featureImageAlt: 'Description of image' # Alternative text for featured image.
# featureImageCap: 'This is the featured image.' # Caption (optional).
thumbnail: "tanzu-completion.png" # Sets thumbnail image appearing inside card on homepage.
# shareImage: "share.png" # Designate a separate image for social media sharing.
codeLineNumbers: false # Override global value for showing of line numbers within code block.
series: Tips
tags:
- vmware
- linux
- tanzu
- kubernetes
- shell
comment: true # Disable comment if false.
---

Lately I've been spending some time [getting more familiar](/tanzu-community-edition-k8s-homelab/) with VMware's [Tanzu Community Edition](https://tanzucommunityedition.io/) Kubernetes distribution, but I'm still not quite familiar enough with the `tanzu` command line. If only there were a better way for me to discover the available commands for a given context and help me type them correctly...

Oh, but there is! You see, one of the available Tanzu commands is `tanzu completion [shell]`, which will spit out the necessary code to generate handy context-based auto-completions appropriate for the shell of your choosing (provided that you choose either `bash` or `zsh`, that is).

Running `tanzu completion --help` will tell you what's needed, and you can just copy/paste the commands appropriate for your shell:

```shell
# Bash instructions:

## Load only for current session:
source <(tanzu completion bash)

## Load for all new sessions:
tanzu completion bash > $HOME/.tanzu/completion.bash.inc
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile

# Zsh instructions:

## Load only for current session:
source <(tanzu completion zsh)

## Load for all new sessions:
echo "autoload -U compinit; compinit" >> ~/.zshrc
tanzu completion zsh > "${fpath[1]}/_tanzu"
```

So to get the completions to load automatically whenever you start a `bash` shell, run:
```shell
tanzu completion bash > $HOME/.tanzu/completion.bash.inc
printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile
```

For a `zsh` shell, it's:
```shell
echo "autoload -U compinit; compinit" >> ~/.zshrc
tanzu completion zsh > "${fpath[1]}/_tanzu"
```

And that's it! The next time you open a shell (or `source` your relevant profile), you'll be able to `[TAB]` your way through the Tanzu CLI!

![Tanzu CLI completion in zsh](tanzu-completion.gif)
---
title: "ESXi ARM Edition on the Quartz64 SBC" # Title of the blog post.
date: 2022-04-23 # Date of post creation.
lastmod: 2022-12-14
description: "Getting started with the experimental ESXi Arm Edition fling to run a VMware hypervisor on the PINE64 Quartz64 single-board computer, and installing a Tailscale node on Photon OS to facilitate improved remote access to my home network." # Description used for search engine.
featured: true # Sets if post is a featured post, making appear on the home page side bar.
draft: false # Sets whether to render this page. Draft of true will not be rendered.
toc: true # Controls if a table of contents should be generated for first-level links automatically.
usePageBundles: true
# menu: main
featureImage: "quartz64.jpg" # Sets featured image on blog post.
# featureImageAlt: 'Description of image' # Alternative text for featured image.
# featureImageCap: 'This is the featured image.' # Caption (optional).
thumbnail: "quartz64.jpg" # Sets thumbnail image appearing inside card on homepage.
# shareImage: "share.png" # Designate a separate image for social media sharing.
codeLineNumbers: false # Override global value for showing of line numbers within code block.
series: Projects
tags:
- vmware
- linux
- chromeos
- homelab
- tailscale
- photon
- vpn
comment: true # Disable comment if false.
---
|
||||||
|
{{% notice info "ESXi-ARM Fling v1.10 Update" %}}
|
||||||
|
On July 20, 2022, VMware released a [major update](https://blogs.vmware.com/arm/2022/07/20/1-10/) for the ESXi-ARM Fling. Among [other fixes and improvements](https://flings.vmware.com/esxi-arm-edition#changelog), this version enables **in-place ESXi upgrades** and [adds support for the Quartz64's **on-board NIC**](https://twitter.com/jmcwhatever/status/1549935971822706688). To update, I:
|
||||||
|
1. Wrote the new ISO installer to another USB drive.
|
||||||
|
2. Attached the installer drive to the USB hub, next to the existing ESXi drive.
|
||||||
|
3. Booted the installer and selected to upgrade ESXi on the existing device.
|
||||||
|
4. Powered-off post-install, unplugged the hub, and attached the ESXi drive directly to the USB2 port on the Quart64.
|
||||||
|
5. Connected the ethernet cable to the onboard NIC.
|
||||||
|
6. Booted to ESXi.
|
||||||
|
7. Once booted, I used the DCUI to (re)configure the management network and activate the onboard network adapter.
|
||||||
|
|
||||||
|
Now I've got directly-attached USB storage, and the onboard NIC provides gigabit connectivity. I've made a few tweaks to the rest of the article to reflect the lifting of those previous limitations.
|
||||||
|
{{% /notice %}}
|
||||||
|
|
||||||
|
Up until this point, [my homelab](/vmware-home-lab-on-intel-nuc-9/) has consisted of just a single Intel NUC9 ESXi host running a bunch of VMs. It's served me well but lately I've been thinking that it would be good to have an additional host for some of my workloads. In particular, I'd like to have a [Tailscale node](/secure-networking-made-simple-with-tailscale/) on my home network which _isn't_ hosted on the NUC so that I can patch ESXi remotely without cutting off my access. I appreciate the small footprint of the NUC so I'm not really interested in a large "grown-up" server at this time. So for now I thought it might be fun to experiment with [VMware's ESXi on ARM fling](https://flings.vmware.com/esxi-arm-edition) which makes it possible to run a full-fledged VMWare hypervisor on a Raspbery Pi.
|
||||||
|
|
||||||
|
Of course, I decided to embark upon this project at a time when Raspberry Pis are basically impossible to get. So instead I picked up a [PINE64 Quartz64](https://wiki.pine64.org/wiki/Quartz64) single-board computer (SBC) which seems like a potentially very-capable piece of hardware.... but there is a prominent warning at the bottom of the [store page](https://pine64.com/product/quartz64-model-a-8gb-single-board-computer/):
|
||||||
|
|
||||||
|
{{% notice warning "Be Advised" %}}
|
||||||
|
"The Quartz64 SBC still in early development stage, only suitable for developers and advanced users wishing to contribute to early software development. Both mainline and Rockchip’s BSP fork of Linux have already been booted on the platform and development is proceeding quickly, but it will be months before end-users and industry partners can reliably deploy it. If you need a single board computer for a private or industrial application today, we encourage you to choose a different board from our existing lineup or wait a few months until Quartz64 software reaches a sufficient degree of maturity."
|
||||||
|
{{% /notice %}}
|
||||||
|
|
||||||
|
More specifically, for my use case there will be a number of limitations (at least for now - this SBC is still pretty new to the market so hopefully support will be improving further over time):
|
||||||
|
- ~~The onboard NIC is not supported by ESXi.~~[^v1.10]
|
||||||
|
- Onboard storage (via eMMC, eSATA, or PCIe) is not supported.
|
||||||
|
- The onboard microSD slot is only used for loading firmware on boot, not for any other storage.
|
||||||
|
- Only two (of the four) USB ports are documented to work reliably.
|
||||||
|
- Of the remaining two ports, the lower USB3 port [shouldn't be depended upon either](https://wiki.pine64.org/wiki/Quartz64_Development#Confirmed_Broken) so I'm really just stuck with a single USB2 interface ~~which will need to handle both networking and storage~~[^v1.10].[^usb3]
|
||||||
|
|
||||||
|
All that is to say that (as usual) I'll be embarking upon this project in Hard Mode - and I'll make it extra challenging (as usual) by doing all of the work from a Chromebook. In any case, here's how I managed to get ESXi running on the the Quartz64 SBC and then deploy a small workload.
|
||||||
|
|
||||||
|
[^usb3]: Jared McNeill, the maintainer of the firmware image I'm using *just* [pushed a commit](https://github.com/jaredmcneill/quartz64_uefi/commit/4bda76e9fce5ed153ac49fa9d51ff34e5dd56d52) which sounds like it may address this flaky USB3 issue but that was after I had gotten everything else working as described below. I'll check that out once a new release gets published.
|
||||||
|
|
||||||
|
[^v1.10]: Fixed in the v1.10 release.
|
||||||
|
### Bill of Materials
Let's start with the gear (hardware and software) I needed to make this work:

| Hardware | Purpose |
| --- | --- |
| [PINE64 Quartz64 Model-A 8GB Single Board Computer](https://pine64.com/product/quartz64-model-a-8gb-single-board-computer/) | kind of the whole point |
| [ROCKPro64 12V 5A US Power Supply](https://pine64.com/product/rockpro64-12v-5a-us-power-supply/) | provides power for the SBC |
| [Serial Console “Woodpecker” Edition](https://pine64.com/product/serial-console-woodpecker-edition/) | allows for serial console access |
| [Google USB-C Adapter](https://www.amazon.com/dp/B071G6NLHJ/) | connects the console adapter to my Chromebook |
| [Sandisk 64GB Micro SD Memory Card](https://www.amazon.com/dp/B00M55C1I2) | only holds the firmware; a much smaller size would be fine |
| [Monoprice USB-C MicroSD Reader](https://www.amazon.com/dp/B00YQM8352/) | to write firmware to the SD card from my Chromebook |
| [Samsung MUF-256AB/AM FIT Plus 256GB USB 3.1 Drive](https://www.amazon.com/dp/B07D7Q41PM) | ESXi boot device and local VMFS datastore |
| ~~[Cable Matters 3 Port USB 3.0 Hub with Ethernet](https://www.amazon.com/gp/product/B01J6583NK)~~ | ~~for network connectivity and to host the above USB drive~~[^v1.10] |
| [3D-printed open enclosure for QUARTZ64](https://www.thingiverse.com/thing:5308499) | protects the board a little bit while allowing for plenty of passive airflow |

| Downloads | Purpose |
| --- | --- |
| [ESXi ARM Edition](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM) (v1.10) | hypervisor |
| [Tianocore EDK II firmware for Quartz64](https://github.com/jaredmcneill/quartz64_uefi/releases) (2022-07-20) | firmware image |
| [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) | easy way to write filesystem images to external media |
| [Beagle Term](https://chrome.google.com/webstore/detail/beagle-term/gkdofhllgfohlddimiiildbgoggdpoea) | for accessing the Quartz64 serial console |
### Preparation

#### Firmware media

The very first task is to write the required firmware image (download [here](https://github.com/jaredmcneill/quartz64_uefi/releases)) to a micro SD card. I used a 64GB card that I had lying around but you could easily get by with a *much* smaller one; the firmware image is tiny, and the card can't be used for storing anything else. Since I'm doing this on a Chromebook, I'll be using the [Chromebook Recovery Utility (CRU)](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) for writing the images to external storage as described [in another post](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/).

After downloading [`QUARTZ64_EFI.img.gz`](https://github.com/jaredmcneill/quartz64_uefi/releases/download/2022-07-20/QUARTZ64_EFI.img.gz), I need to get it into a format recognized by CRU and, in this case, that means extracting the gzipped archive and then compressing the `.img` file into a standard `.zip`:

```shell
gunzip QUARTZ64_EFI.img.gz
zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img
```

I can then write it to the micro SD card by opening CRU, clicking on the gear icon, and selecting the *Use local image* option.

![Writing the firmware image](writing_firmware.png)

#### ESXi installation media

I'll also need to prepare the ESXi installation media (download [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM)). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new `ESX-OSData` partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the flaky USB support of the Quartz64. (While you *can* install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.)

In any case, to make the downloaded `VMware-VMvisor-Installer-7.0-20133114.aarch64.iso` writeable with CRU all I need to do is add `.bin` to the end of the filename:

```shell
mv VMware-VMvisor-Installer-7.0-20133114.aarch64.iso{,.bin}
```
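
The `{,.bin}` bit is Bash brace expansion: the shell expands that single argument into the original filename plus the same name with `.bin` appended, which saves typing that long installer name twice. A quick throwaway demonstration:

```shell
# Brace expansion turns `mv example.iso{,.bin}` into `mv example.iso example.iso.bin`
touch example.iso
mv example.iso{,.bin}
ls example.iso.bin
```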

Then it's time to write the image onto the USB drive:

![Writing the ESXi installer image](writing_esxi.png)

#### Console connection

I'll need to use the Quartz64 serial console interface and ["Woodpecker" edition console USB adapter](https://pine64.com/product/serial-console-woodpecker-edition/) to interact with the board until I get ESXi installed and can connect to it with the web interface or SSH. The adapter comes with a short breakout cable, and I connect it thusly:

| Quartz64 GPIO pin | Console adapter pin | Wire color |
| --- | --- | --- |
| 6 | `GND` | Brown |
| 8 | `RXD` | Red |
| 10 | `TXD` | Orange |

I leave the yellow wire dangling free on both ends since I don't need a `+V` connection for the console to work.

![Console connection](console_connection.jpg)

To verify that I've got things working, I go ahead and pop the micro SD card containing the firmware into its slot on the bottom side of the Quartz64 board, connect the USB console adapter to my Chromebook, and open the [Beagle Term](https://chrome.google.com/webstore/detail/beagle-term/gkdofhllgfohlddimiiildbgoggdpoea) app to set up the serial connection.

I'll need to use these settings for the connection (which are the defaults selected by Beagle Term):

| Setting | Value |
| --- | --- |
| Port | `/dev/ttyUSB0` |
| Bitrate | `115200` |
| Data Bit | `8 bit` |
| Parity | `none` |
| Stop Bit | `1` |
| Flow Control | `none` |

![Beagle Term settings](beagle_term_settings.png)
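
Beagle Term is a Chromebook workaround; on a regular Linux box, those same 115200-8-N-1 settings collapse into a single `screen` (or `minicom`) invocation, assuming the adapter enumerates as `/dev/ttyUSB0`:

```shell
# 8 data bits, no parity, 1 stop bit is screen's default serial framing,
# so only the device and the bitrate need to be specified
sudo screen /dev/ttyUSB0 115200
```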

I hit **Connect** and then connect the Quartz64's power supply. I watch as it loads the firmware and then launches the BIOS menu:

![BIOS menu](bios.png)

### Host creation

#### ESXi install

Now that I've got everything in order I can start the install. A lot of experimentation on my part confirmed the sad news: of the board's four USB ports, only the top-right USB 2.0 port works reliably for me. So I connect my ~~USB NIC+hub to that port, and plug in my 256GB drive to the hub~~[^v1.10] 256GB USB drive there. This isn't ideal from a performance standpoint, of course, but slow storage is more useful than no storage.

On that note, remember what I mentioned earlier about how the ESXi installer would want to fill up ~128GB worth of whatever drive it targets? The ESXi ARM instructions say that you can get around that by passing the `autoPartitionOSDataSize` advanced option to the installer by pressing `[Shift] + O` in the ESXi bootloader, but the Quartz64-specific instructions say that you can't do that with this board since only the serial console is available... It turns out this is a (happy) lie.
I hooked up a monitor to the board's HDMI port and a USB keyboard to a free port on the hub and verified that the keyboard let me maneuver through the BIOS menu. From here, I hit the **Reset** button on the Quartz64 to restart it and let it boot from the connected USB drive. When I got to the ESXi pre-boot countdown screen, I pressed `[Shift] + O` as instructed and added `autoPartitionOSDataSize=8192` to the boot options. This limits the size of the new-for-ESXi7 ESX-OSData VMFS-L volume to 8GB and will give me much more space for the local datastore.
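
To put rough numbers on why that option matters (my own ballpark figures, not official VMware sizing math): on a 256GB drive, capping ESX-OSData at 8GB instead of letting it grow to 128GB frees up roughly 120GB more for the datastore.

```python
# Rough comparison of leftover datastore space on a ~238 GiB usable drive,
# assuming ~4 GiB of boot/bootbank overhead (estimates for illustration only).
DRIVE_GIB = 238
BOOT_OVERHEAD_GIB = 4

def datastore_gib(osdata_gib: int) -> int:
    """Space left for the VMFS datastore after OSData claims its share."""
    return DRIVE_GIB - BOOT_OVERHEAD_GIB - osdata_gib

default = datastore_gib(128)  # default behavior: OSData can grow to 128 GiB
capped = datastore_gib(8)     # autoPartitionOSDataSize=8192 (MiB) caps it at ~8 GiB

print(default, capped)  # → 106 226
```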

Beyond that it's a fairly typical ESXi install process:

![Hi, welcome to the ESXi for ARM installer. I'll be your UI this evening.](esxi_install_1.png)
![Just to be sure, I'm going to clobber everything on this USB drive.](esxi_install_2.png)
![Hold on to your butts, here we go!](esxi_install_3.png)
![Whew, we made it!](esxi_install_4.png)

#### Initial configuration

After the installation completed, I rebooted the host and watched for the Direct Console User Interface (DCUI) to come up:

![ESXi DCUI](dcui.png)

I hit `[F2]` and logged in with the root credentials to get to the System Customization menu:

![DCUI System Customization](dcui_system_customization.png)

The host automatically received an IP issued by DHCP but I'd like for it to instead use a static IP. I'll also go ahead and configure the appropriate DNS settings.

![Setting the IP address](dcui_ip_address.png)
![Configuring DNS settings](dcui_dns.png)

I also create the appropriate matching `A` and `PTR` records in my local DNS, and (after bouncing the management network) I can access the ESXi Embedded Host Client at `https://quartzhost.lab.bowdre.net`:

![ESXi Embedded Host Client login screen](embedded_host_client_login.png)
![Summary view of my new host!](embedded_host_client_summary.png)

That's looking pretty good... but what's up with that date and time? Time has kind of lost all meaning in the last couple of years but I'm *reasonably* certain that January 1, 2001 was at least a few years ago. And I know from past experience that incorrect host time will prevent it from being successfully imported to a vCenter inventory.
Let's clear that up by enabling the Network Time Protocol (NTP) service on this host. I'll do that by going to **Manage > System > Time & Date** and clicking the **Edit NTP Settings** button. I don't run a local NTP server so I'll point it at `pool.ntp.org` and set the service to start and stop with the host:
![NTP configuration](ntp_configuration.png)
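
The same settings can be applied from the host's shell instead of the UI; this is a sketch using the `esxcli system ntp` namespace available in recent ESXi 7.x builds, so verify the command exists on your build before relying on it:

```shell
# Point the NTP client at the upstream pool and enable it, then verify
esxcli system ntp set --server=pool.ntp.org --enabled=true
esxcli system ntp get
```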
Now I hop over to the **Services** tab, select the `ntpd` service, and then click the **Start** button there. Once it's running, I then *restart* `ntpd` to help encourage the system to update the time immediately.

![Starting the NTP service](services.png)

Once the service is started I can go back to **Manage > System > Time & Date**, click the **Refresh** button, and confirm that the host has been updated with the correct time:

![Correct time!](correct_time.png)

With the time sorted, I'm just about ready to join this host to my vCenter, but first I'd like to take a look at the storage situation - after all, I did jump through those hoops with the installer to make sure that I would wind up with a useful local datastore. Upon going to **Storage > More storage > Devices** and clicking on the single listed storage device, I can see in the Partition Diagram that the ESX-OSData VMFS-L volume was indeed limited to 8GB, and the free space beyond that was automatically formatted as a VMFS datastore:

![Reviewing the partition diagram](storage_device.png)

And I can also take a peek at that local datastore:

![Local datastore](storage_datastore.png)

With 200+ gigabytes of free space on the datastore I should have ample room for a few lightweight VMs.

#### Adding to vCenter

Alright, let's go ahead and bring the new host into my vCenter environment. That starts off just like any other host, by right-clicking an inventory location in the *Hosts & Clusters* view and selecting **Add Host**.

![Starting the process](add_host.png)

![Reviewing the host details](add_host_confirm.png)

![Successfully added to the vCenter](host_added.png)

Success! I've now got a single-board hypervisor connected to my vCenter. Now let's give that host a workload.[^workloads]

[^workloads]: Hosts *love* workloads.

### Workload creation

As I mentioned earlier, my initial goal is to deploy a Tailscale node on my new host so that I can access my home network from outside of the single-host virtual lab environment. I've become a fan of using VMware's [Photon OS](https://vmware.github.io/photon/) so I'll get a VM deployed and then install the Tailscale agent.

#### Deploying Photon OS

VMware provides Photon in a few different formats, as described on the [download page](https://github.com/vmware/photon/wiki/Downloading-Photon-OS). I'm going to use the "OVA with virtual hardware v13 arm64" version so I'll kick off that download of `photon_uefi.ova`. I'm actually going to download that file straight to my `deb01` Linux VM:

```shell
wget https://packages.vmware.com/photon/4.0/Rev2/ova/photon_uefi.ova
```

and then spawn a quick Python web server to share it out:

```shell
❯ python3 -m http.server
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
That will let me deploy from a resource already inside my lab network instead of transferring the OVA from my laptop. So now I can go back to my vSphere Client and go through the steps to **Deploy OVF Template** to the new host, and I'll plug in the URL `http://deb01.lab.bowdre.net:8000/photon_uefi.ova`:

![Deploying a template from URL](deploy_from_url.png)

I'll name it `pho01` and drop it in an appropriate VM folder:

![Naming the new VM](name_vm.png)

And place it on the new Quartz64 host:

![Host placement](vm_placement.png)

The rest of the OVF deployment is basically just selecting the default options and clicking through to finish it. And then once it's deployed, I'll go ahead and power on the new VM.

![The newly-created Photon VM](new_vm.png)

#### Configuring Photon

There are just a few things I'll want to configure on this VM before I move on to installing Tailscale, and I'll start out simply by logging in with the remote console.

{{% notice info "Default credentials" %}}
The default password for Photon's `root` user is `changeme`. You'll be forced to change that at first login.
{{% /notice %}}

![First login, and the requisite password change](first_login.png)

Now that I'm in, I'll set the hostname appropriately:

```bash
hostnamectl set-hostname pho01
```
For now, the VM pulled an IP from DHCP but I would like to configure that statically instead. To do that, I'll create a new interface file:

```bash
cat > /etc/systemd/network/10-static-en.network << "EOF"

[Match]
Name = eth0

[Network]
Address = 192.168.1.17/24
Gateway = 192.168.1.1
DNS = 192.168.1.5
DHCP = no
IPForward = yes

EOF

chmod 644 /etc/systemd/network/10-static-en.network
systemctl restart systemd-networkd
```

I'm including `IPForward = yes` to [enable IP forwarding](https://tailscale.com/kb/1104/enable-ip-forwarding/) for Tailscale.
With networking sorted, it's probably a good idea to check for and apply any available updates:

```bash
tdnf update -y
```
I'll also go ahead and create a normal user account (with sudo privileges) for me to use:

```bash
useradd -G wheel -m john
passwd john
```
Now I can use SSH to connect to the VM and ditch the web console:

```bash
❯ ssh pho01.lab.bowdre.net
Password:
john@pho01 [ ~ ]$ sudo whoami

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for john
root
```

Looking good! I'll now move on to the justification[^justification] for this entire exercise:

[^justification]: Entirely arbitrary and fabricated justification.

#### Installing Tailscale
If I *weren't* doing this on hard mode, I could use Tailscale's [install script](https://tailscale.com/download) like I do on every other Linux system. Hard mode is what I do, though, and the installer doesn't directly support Photon OS. I'll instead consult the [manual install instructions](https://tailscale.com/download/linux/static), which tell me to download the appropriate binaries from [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static). So I'll grab the link for the latest `arm64` build and pull it down to the VM:

```bash
curl https://pkgs.tailscale.com/stable/tailscale_1.22.2_arm64.tgz --output tailscale_arm64.tgz
```
Then I can unpack it:

```bash
sudo tdnf install tar
tar xvf tailscale_arm64.tgz
cd tailscale_1.22.2_arm64/
```
So I've got the `tailscale` and `tailscaled` binaries as well as some sample service configs in the `systemd` directory:

```bash
john@pho01 [ ~/tailscale_1.22.2_arm64 ]$ ls -lR
.:
total 32288
drwxr-x--- 2 john users     4096 Mar 18 02:44 systemd
-rwxr-x--- 1 john users 12187139 Mar 18 02:44 tailscale
-rwxr-x--- 1 john users 20866538 Mar 18 02:44 tailscaled

./systemd:
total 8
-rw-r----- 1 john users 287 Mar 18 02:44 tailscaled.defaults
-rw-r----- 1 john users 674 Mar 18 02:44 tailscaled.service
```

Dealing with the binaries is straightforward. I'll drop them into `/usr/bin/` and `/usr/sbin/` (respectively) and set the file permissions:

```bash
sudo install -m 755 tailscale /usr/bin/
sudo install -m 755 tailscaled /usr/sbin/
```
Then I'll descend to the `systemd` folder and see what's up:

```bash
john@pho01 [ ~/tailscale_1.22.2_arm64/ ]$ cd systemd/

john@pho01 [ ~/tailscale_1.22.2_arm64/systemd ]$ cat tailscaled.defaults
# Set the port to listen on for incoming VPN packets.
# Remote nodes will automatically be informed about the new port number,
# but you might want to configure this in order to set external firewall
# settings.
PORT="41641"

# Extra flags you might want to pass to tailscaled.
FLAGS=""

john@pho01 [ ~/tailscale_1.22.2_arm64/systemd ]$ cat tailscaled.service
[Unit]
Description=Tailscale node agent
Documentation=https://tailscale.com/kb/
Wants=network-pre.target
After=network-pre.target NetworkManager.service systemd-resolved.service

[Service]
EnvironmentFile=/etc/default/tailscaled
ExecStartPre=/usr/sbin/tailscaled --cleanup
ExecStart=/usr/sbin/tailscaled --state=/var/lib/tailscale/tailscaled.state --socket=/run/tailscale/tailscaled.sock --port $PORT $FLAGS
ExecStopPost=/usr/sbin/tailscaled --cleanup

Restart=on-failure

RuntimeDirectory=tailscale
RuntimeDirectoryMode=0755
StateDirectory=tailscale
StateDirectoryMode=0700
CacheDirectory=tailscale
CacheDirectoryMode=0750
Type=notify

[Install]
WantedBy=multi-user.target
```

`tailscaled.defaults` contains the default configuration that will be referenced by the service, and `tailscaled.service` tells me that it expects to find it at `/etc/default/tailscaled`. So I'll copy it there and set the perms:

```bash
sudo install -m 644 tailscaled.defaults /etc/default/tailscaled
```
`tailscaled.service` will get dropped in `/usr/lib/systemd/system/`:

```bash
sudo install -m 644 tailscaled.service /usr/lib/systemd/system/
```

Then I'll enable the service and start it:
```bash
sudo systemctl enable tailscaled.service
sudo systemctl start tailscaled.service
```
And finally log in to Tailscale, including my `tag:home` tag for [ACL purposes](/secure-networking-made-simple-with-tailscale/#acls) and a route advertisement for my home network so that my other Tailscale nodes can use this one to access other devices as well:

```bash
sudo tailscale up --advertise-tags "tag:home" --advertise-routes "192.168.1.0/24"
```

That will return a URL I can use to authenticate, and I'll then be able to view and manage the new Tailscale node from the `login.tailscale.com` admin portal:

![Success!](new_tailscale_node.png)

You might remember [from last time](/secure-networking-made-simple-with-tailscale/#subnets-and-exit-nodes) that the "Subnets (!)" label indicates that this node is attempting to advertise a subnet route but that route hasn't yet been accepted through the admin portal. You may also remember that the `192.168.1.0/24` subnet is already being advertised by my `vyos` node:[^hassos]

![Actively-routed subnets show up black, while advertised-but-not-currently-routed subnets appear grey](advertised_subnets.png)

Things could potentially get messy if I have two nodes advertising routes for the same subnet[^failover] so I'm going to use the admin portal to disable that route on `vyos` before enabling it for `pho01`. I'll let `vyos` continue to route the `172.16.0.0/16` subnet (which only exists inside the NUC's vSphere environment after all) and it can continue to function as an Exit Node as well.

![Disabling the subnet on vyos](disabling_subnet_on_vyos.png)

![Enabling the subnet on pho01](enabling_subnet_on_pho01.png)

![Updated subnets](updated_subnets.png)

Now I can remotely access the VM (and thus my homelab!) from any of my other Tailscale-enrolled devices!
[^hassos]: The [Tailscale add-on for Home Assistant](https://github.com/hassio-addons/addon-tailscale) also tries to advertise its subnets by default, but I leave that disabled in the admin portal as well.
[^failover]: Tailscale does offer a [subnet router failover feature](https://tailscale.com/kb/1115/subnet-failover/) but it is only available starting on the [Business ($15/month) plan](https://tailscale.com/pricing/) and not the $48/year Personal Pro plan that I'm using.

### Conclusion

I actually received the Quartz64 way back on March 2nd, and it's taken me until this week to get all the pieces in place and working the way I wanted.

{{< tweet user="johndotbowdre" id="1499194756148125701" >}}

As is so often the case, a lot of time and effort would have been saved if I had RTFM'd[^rtfm] before diving into the deep end. I definitely hadn't anticipated all the limitations that would come with the Quartz64 SBC before ordering mine. Now that it's done, though, I'm pretty pleased with the setup, and I feel like I learned quite a bit along the way. I keep reminding myself that this is still a very new hardware platform. I'm excited to see how things improve with future development efforts.

[^rtfm]: Read The *Friendly* Manual. Yeah.
---
series: vRA8
date: "2021-11-05T00:00:00Z"
thumbnail: 20211105_ssc_403.png
usePageBundles: true
tags:
- vmware
- vra
- lcm
- salt
- openssl
- certs
title: Fixing 403 error on SaltStack Config 8.6 integrated with vRA and vIDM
---

I've been wanting to learn a bit more about [SaltStack Config](https://www.vmware.com/products/vrealize-automation/saltstack-config.html) so I recently deployed SSC 8.6 to my environment (using vRealize Suite Lifecycle Manager to do so as [described here](https://cosmin.gq/2021/02/02/deploying-saltstack-config-via-lifecycle-manager-in-a-vra-environment/)). I selected the option to integrate with my pre-existing vRA and vIDM instances so that I wouldn't have to manage authentication directly since I recall that the LDAP authentication piece was a little clumsy the last time I tried it.
### The Problem

Unfortunately I ran into a problem immediately after the deployment completed:

![403 error from SSC](20211105_ssc_403.png)

Instead of being redirected to the vIDM authentication screen, I get a 403 Forbidden error.
I used SSH to log in to the SSC appliance as `root`, and I found this in the `/var/log/raas/raas` log file:

```
2021-11-05 18:37:47,705 [var.lib.raas.unpack._MEIV8zDs3.raas.mods.vra.params ][ERROR :252 ][Webserver:6170] SSL Exception - https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery may be using a self-signed certificate HTTPSConnectionPool(host='vra.lab.bowdre.net', port=443): Max retries exceeded with url: /csp/gateway/am/api/auth/discovery?username=service_type&state=aHR0cHM6Ly9zc2MubGFiLmJvd2RyZS5uZXQvaWRlbnRpdHkvYXBpL2NvcmUvYXV0aG4vY3Nw&redirect_uri=https%3A%2F%2Fssc.lab.bowdre.net%2Fidentity%2Fapi%2Fcore%2Fauthn%2Fcsp&client_id=ssc-299XZv71So (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)')))
2021-11-05 18:37:47,928 [tornado.application ][ERROR :1792][Webserver:6170] Uncaught exception GET /csp/gateway/am/api/loggedin/user/profile (192.168.1.100)
HTTPServerRequest(protocol='https', host='ssc.lab.bowdre.net', method='GET', uri='/csp/gateway/am/api/loggedin/user/profile', version='HTTP/1.1', remote_ip='192.168.1.100')
Traceback (most recent call last):
  File "urllib3/connectionpool.py", line 706, in urlopen
  File "urllib3/connectionpool.py", line 382, in _make_request
  File "urllib3/connectionpool.py", line 1010, in _validate_conn
  File "urllib3/connection.py", line 421, in connect
  File "urllib3/util/ssl_.py", line 429, in ssl_wrap_socket
  File "urllib3/util/ssl_.py", line 472, in _ssl_wrap_socket_impl
  File "ssl.py", line 423, in wrap_socket
  File "ssl.py", line 870, in _create
  File "ssl.py", line 1139, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)
```
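
As an aside, the `state` parameter in that failing discovery request is just the base64-encoded SSC callback URL, which a couple of lines of Python will confirm:

```python
import base64

# `state` value copied from the raas log line above
state = "aHR0cHM6Ly9zc2MubGFiLmJvd2RyZS5uZXQvaWRlbnRpdHkvYXBpL2NvcmUvYXV0aG4vY3Nw"
print(base64.b64decode(state).decode())
# → https://ssc.lab.bowdre.net/identity/api/core/authn/csp
```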
Further, attempting to pull down that URL with `curl` also failed:

```sh
root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
curl: (60) SSL certificate problem: self signed certificate in certificate chain
More details here: https://curl.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
```
In my homelab, I am indeed using self-signed certificates. I also encountered the same issue in my lab at work, though, and I'm using certs issued by our enterprise CA there. I had run into a similar problem with previous versions of SSC, but the [quick-and-dirty workaround to disable certificate verification](https://communities.vmware.com/t5/VMware-vRealize-Discussions/SaltStack-Config-Integration-show-Blank-Page/td-p/2863973) doesn't seem to work anymore.
### The Solution
Clearly I needed to import either the vRA system's certificate (for my homelab) or the certificate chain for my enterprise CA (for my work environment) into SSC's certificate store so that it will trust vRA. But how?
I fumbled around for a bit and managed to get the required certs added to the system certificate store so that my `curl` test would succeed, but trying to access the SSC web UI still gave me a big middle finger. I eventually found [this documentation](https://docs.vmware.com/en/VMware-vRealize-Automation-SaltStack-Config/8.6/install-configure-saltstack-config/GUID-21A87CE2-8184-4F41-B71B-0FCBB93F21FC.html#troubleshooting-saltstack-config-environments-with-vrealize-automation-that-use-selfsigned-certificates-3) which describes how to configure SSC to work with self-signed certs, and it held the missing detail of how to tell the SaltStack Returner-as-a-Service (RaaS) component that it should use that system certificate store.
So here's what I did to get things working in my homelab:
1. Point a browser to my vRA instance, click on the certificate error to view the certificate details, and then export the _CA_ certificate to a local file. (For a self-signed cert issued by LCM, this will likely be called something like `Automatically generated one-off CA authority for vRA`.)
    ![Exporting the self-signed CA cert](20211105_export_selfsigned_ca.png)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`.
3. Append the certificate to the end of the system `ca-bundle.crt`:
    ```sh
    cat <vra.crt >> /etc/pki/tls/certs/ca-bundle.crt
    ```
4. Test that I can now `curl` from vRA without a certificate error:
    ```sh
    root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery
    {"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""}
    ```
5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding
    ```
    Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
    ```
    above the `ExecStart` line:
    ```sh
    root@ssc [ ~ ]# cat /usr/lib/systemd/system/raas.service
    [Unit]
    Description=The SaltStack Enterprise API Server
    After=network.target

    [Service]
    Type=simple
    User=raas
    Group=raas
    # to be able to bind port < 1024
    AmbientCapabilities=CAP_NET_BIND_SERVICE
    NoNewPrivileges=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
    PermissionsStartOnly=true
    ExecStartPre=/bin/sh -c 'systemctl set-environment FIPS_MODE=$(/opt/vmware/bin/ovfenv -q --key fips-mode)'
    ExecStartPre=/bin/sh -c 'systemctl set-environment NODE_TYPE=$(/opt/vmware/bin/ovfenv -q --key node-type)'
    Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt
    ExecStart=/usr/bin/raas
    TimeoutStopSec=90

    [Install]
    WantedBy=multi-user.target
    ```
6. Stop and restart the `raas` service:
    ```sh
    systemctl daemon-reload
    systemctl stop raas
    systemctl start raas
    ```
7. And then try to visit the SSC URL again. This time, it redirects successfully to vIDM:
    ![Successful vIDM redirect](20211105_vidm_login.png)
8. Log in and get salty:
    ![Get salty!](20211105_get_salty.png)

The steps for doing this at work with an enterprise CA were pretty similar, with just slightly-different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in `.p7b` format.
2. Use `openssl` to extract the individual certificates:
    ```sh
    openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem
    ```
    Copy it to the SSC appliance, and then pick up with Step 3 above.

I'm eager to dive deeper with SSC and figure out how best to leverage it with vRA. I'll let you know if/when I figure out any cool tricks!
In the meantime, maybe my struggles today can help you get past similar hurdles in your SSC deployments.