Mirror of https://github.com/jbowdre/virtuallypotato.git (synced 2024-11-25 08:22:19 +00:00)

fix broken image links

parent a01b47be35, commit 2c296347be
8 changed files with 107 additions and 107 deletions

@@ -39,4 +39,4 @@ Putting Netlify in front of the repositories where my site content is stored als

**Anyway, here we are: the new Virtually Potato, powered by Hugo and Netlify!**

![Woohoo!](/celebration.gif)
![Woohoo!](celebration.gif)

@@ -323,9 +323,9 @@ I'll do that with another scriptable task element, named `Apply new names`, whic

{{% notice info "Binding a workflow output" %}}
To easily create a new workflow output and bind it to a task's output, click the task's **Add New** option like usual:

![](/add_new.png)
![](add_new.png)

Select **Output** at the top of the *New Variable* dialog and then complete the form with the other required details:

![](/new_output_parameter.png)
![](new_output_parameter.png)

{{% /notice %}}

@@ -78,20 +78,20 @@ It's not pretty, but it'll do the trick. All that's left is to export this data
```powershell
get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid | export-csv -NoTypeInformation ./networks.csv
```

![My networks.csv export, including the networks which don't match the naming criteria and will be skipped by the import process.](/networks.csv.png)
![My networks.csv export, including the networks which don't match the naming criteria and will be skipped by the import process.](networks.csv.png)

### Setting up phpIPAM
After [deploying a fresh phpIPAM instance on my Tanzu Community Edition Kubernetes cluster](/tanzu-community-edition-k8s-homelab/#a-real-workload---phpipam), there are a few additional steps needed to enable API access. To start, I log in to my phpIPAM instance and navigate to the **Administration > Server Management > phpIPAM Settings** page, where I enable both the *Prettify links* and *API* feature settings - making sure to hit the **Save** button at the bottom of the page once I do so.

![Enabling the API](/server_settings.png)
![Enabling the API](server_settings.png)

Then I need to head to the **User Management** page to create a new user that will be used to authenticate against the API:

![New user creation](/new_user.png)
![New user creation](new_user.png)

And finally, I head to the **API** section to create a new API key with Read/Write permissions:

![API key creation](/api_user.png)
![API key creation](api_user.png)
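With the user and API key created, it can be worth a quick sanity check of the API from the command line before handing things over to the import script. This is only a hedged sketch: the app ID `api-user` matches the authentication URL shown later in the post, but the credentials are placeholders and it assumes `curl` and `jq` are available.

```bash
# Request a session token from phpIPAM using the API app ID and the API user's credentials
IPAM="https://ipam-k8s.lab.bowdre.net/api/api-user"
TOKEN=$(curl -sk -X POST -u api_user:SuperSecretPassword "${IPAM}/user/" | jq -r '.data.token')

# Use the token to list sections (still empty at this point)
curl -sk -H "token: ${TOKEN}" "${IPAM}/sections/" | jq
```

If both calls come back cleanly, the *Prettify links* and *API* settings are doing their job.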

I'm also going to head into **Administration > IP Related Management > Sections** and delete the default sample sections so that the inventory will be nice and empty:

![We don't need no stinkin' sections!](/empty_sections.png)
![We don't need no stinkin' sections!](empty_sections.png)

### Script time
Well, that's enough prep work; now it's time for the Python3 [script](https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py):

@@ -560,13 +560,13 @@ Authenticating to https://ipam-k8s.lab.bowdre.net/api/api-user...
```

Success! Now I can log in to my phpIPAM instance and check out my newly-imported subnets:

![New subnets!](/created_subnets.png)
![New subnets!](created_subnets.png)

Even the one with the weird name formatting was parsed and imported correctly:

![Subnet details](/subnet_detail.png)
![Subnet details](subnet_detail.png)

So now phpIPAM knows about the vSphere networks I care about, and it can keep track of which VLAN and nameservers go with which networks. Great! But it still isn't scanning or monitoring those networks, even though I told the script that I wanted to use a remote scan agent. I can check the **Administration > Server management > Scan agents** section of the phpIPAM interface and see my newly-created agent configuration.

![New agent config](/agent_config.png)
![New agent config](agent_config.png)

... but I haven't actually *deployed* an agent yet. I'll do that by following the same basic steps [described here](/tanzu-community-edition-k8s-homelab/#phpipam-agent) to spin up my `phpipam-agent` on Kubernetes, and I'll plug in that automagically-generated code for the `IPAM_AGENT_KEY` environment variable:

@@ -613,7 +613,7 @@ spec:
```

I kick it off with a `kubectl apply` command and check back a few minutes later (after the 15-minute interval defined in the above YAML) to see that it worked - the remote agent scanned like it was supposed to and is reporting IP status back to the phpIPAM database server:

![Newly-discovered IPs](/discovered_ips.png)
![Newly-discovered IPs](discovered_ips.png)
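For reference, the deploy-and-verify step on the Kubernetes side might look something like this; it's only a sketch that assumes the manifest above was saved as `phpipam-agent.yaml` and that the deployment is labeled `app=phpipam-agent` (both assumptions, not values confirmed in the post).

```bash
# Apply the agent manifest and watch the pod come up
kubectl apply -f phpipam-agent.yaml
kubectl get pods -l app=phpipam-agent

# Tail the agent logs to watch the scheduled scans fire
kubectl logs -l app=phpipam-agent --follow
```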

I think I've got some more tweaks to do with this environment (why isn't phpIPAM resolving hostnames despite the correct DNS servers being configured?) but this at least demonstrates a successful proof-of-concept import thanks to my Python script. Sure, I only imported 10 networks here, but I feel like I'm ready to process the several hundred which are available in our production environment now.

@@ -77,7 +77,7 @@ zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img

I can then write it to the micro SD card by opening CRU, clicking on the gear icon, and selecting the *Use local image* option.

![Writing the firmware image](/writing_firmware.png)
![Writing the firmware image](writing_firmware.png)

#### ESXi installation media
I'll also need to prepare the ESXi installation media (download [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM)). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new `ESX-OSData` partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the unavailable/flaky USB support of the Quartz64. (While you *can* install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.)

@@ -88,7 +88,7 @@ mv VMware-VMvisor-Installer-7.0.0-19546333.aarch64.iso{,.bin}
```

Then it's time to write the image onto the USB drive:

![Writing the ESXi installer image](/writing_esxi.png)
![Writing the ESXi installer image](writing_esxi.png)

#### Console connection

@@ -101,7 +101,7 @@ I'll need to use the Quartz64 serial console interface and ["Woodpecker" edition

| 10 | `TXD` | Orange |

I leave the yellow wire dangling free on both ends since I don't need a `+V` connection for the console to work.

![Console connection](/console_connection.jpg)
![Console connection](console_connection.jpg)

To verify that I've got things working, I go ahead and pop the micro SD card containing the firmware into its slot on the bottom side of the Quartz64 board, connect the USB console adapter to my Chromebook, and open the [Beagle Term](https://chrome.google.com/webstore/detail/beagle-term/gkdofhllgfohlddimiiildbgoggdpoea) app to set up the serial connection.
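As an aside, the same serial console can be reached from a Linux machine with a standard terminal program instead of Beagle Term. This is just a sketch, assuming the USB serial adapter enumerates as `/dev/ttyUSB0` and the board uses the 1500000 baud rate typical of Rockchip SoCs:

```bash
# Attach to the serial console with screen (Ctrl-A then K to exit)...
sudo screen /dev/ttyUSB0 1500000

# ...or with minicom
sudo minicom -D /dev/ttyUSB0 -b 1500000
```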
@@ -119,7 +119,7 @@ I'll need to use these settings for the connection (which are the defaults selec

![Beagle Term settings](beagle_term_settings.png)

I hit **Connect** and then connect the Quartz64's power supply. I watch as it loads the firmware and then launches the BIOS menu:

![BIOS menu](/bios.png)
![BIOS menu](bios.png)

### Host creation
#### ESXi install

@@ -130,52 +130,52 @@ On that note, remember what I mentioned earlier about how the ESXi installer wou

I hooked up a monitor to the board's HDMI port and a USB keyboard to a free port on the hub and verified that the keyboard let me maneuver through the BIOS menu. From here, I hit the **Reset** button on the Quartz64 to restart it and let it boot from the connected USB drive. When I got to the ESXi pre-boot countdown screen, I pressed `[Shift] + O` as instructed and added `autoPartitionOSDataSize=8192` to the boot options. This limits the size of the new-for-ESXi7 ESX-OSData VMFS-L volume to 8GB and will give me much more space for the local datastore.

Beyond that, it's a fairly typical ESXi install process:

![Hi, welcome to the ESXi for ARM installer. I'll be your UI this evening.](/esxi_install_1.png)
![Just to be sure, I'm going to clobber everything on this USB drive.](/esxi_install_2.png)
![Hold on to your butts, here we go!](/esxi_install_3.png)
![Whew, we made it!](/esxi_install_4.png)
![Hi, welcome to the ESXi for ARM installer. I'll be your UI this evening.](esxi_install_1.png)
![Just to be sure, I'm going to clobber everything on this USB drive.](esxi_install_2.png)
![Hold on to your butts, here we go!](esxi_install_3.png)
![Whew, we made it!](esxi_install_4.png)

#### Initial configuration
After the installation completed, I rebooted the host and watched for the Direct Console User Interface (DCUI) to come up:

![ESXi DCUI](/dcui.png)
![ESXi DCUI](dcui.png)

I hit `[F2]` and logged in with the root credentials to get to the System Customization menu:

![DCUI System Customization](/dcui_system_customization.png)
![DCUI System Customization](dcui_system_customization.png)

The host automatically received an IP issued by DHCP, but I'd like it to use a static IP instead. I'll also go ahead and configure the appropriate DNS settings.

![Setting the IP address](/dcui_ip_address.png)
![Configuring DNS settings](/dcui_dns.png)
![Setting the IP address](dcui_ip_address.png)
![Configuring DNS settings](dcui_dns.png)
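The DCUI is the easy way to do this, but the same configuration can also be applied from the ESXi shell if that's more convenient. A hedged sketch with placeholder addresses (the post doesn't spell out the exact values), using standard `esxcli` namespaces:

```bash
# Static management IP on vmk0 plus a default route (addresses are placeholders)
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=192.168.1.241 --netmask=255.255.255.0
esxcli network ip route ipv4 add --network=default --gateway=192.168.1.1

# DNS server, search domain, and hostname
esxcli network ip dns server add --server=192.168.1.5
esxcli network ip dns search add --domain=lab.bowdre.net
esxcli system hostname set --fqdn=quartzhost.lab.bowdre.net
```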

I also create the appropriate matching `A` and `PTR` records in my local DNS, and (after bouncing the management network) I can access the ESXi Embedded Host Client at `https://quartzhost.lab.bowdre.net`:

![ESXi Embedded Host Client login screen](/embedded_host_client_login.png)
![Summary view of my new host!](/embedded_host_client_summary.png)
![ESXi Embedded Host Client login screen](embedded_host_client_login.png)
![Summary view of my new host!](embedded_host_client_summary.png)

That's looking pretty good... but what's up with that date and time? Time has kind of lost all meaning in the last couple of years but I'm *reasonably* certain that January 1, 2001 was at least a few years ago. And I know from past experience that incorrect host time will prevent it from being successfully imported to a vCenter inventory.

Let's clear that up by enabling the Network Time Protocol (NTP) service on this host. I'll do that by going to **Manage > System > Time & Date** and clicking the **Edit NTP Settings** button. I don't run a local NTP server so I'll point it at `pool.ntp.org` and set the service to start and stop with the host:

![NTP configuration](/ntp_configuration.png)
![NTP configuration](ntp_configuration.png)

Now I hop over to the **Services** tab, select the `ntpd` service, and then click the **Start** button there. Once it's running, I *restart* `ntpd` to help encourage the system to update the time immediately.

![Starting the NTP service](/services.png)
![Starting the NTP service](services.png)

Once the service is started, I can go back to **Manage > System > Time & Date**, click the **Refresh** button, and confirm that the host has been updated with the correct time:

![Correct time!](/correct_time.png)
![Correct time!](correct_time.png)

With the time sorted, I'm just about ready to join this host to my vCenter, but first I'd like to take a look at the storage situation - after all, I did jump through those hoops with the installer to make sure that I would wind up with a useful local datastore. Upon going to **Storage > More storage > Devices** and clicking on the single listed storage device, I can see in the Partition Diagram that the ESX-OSData VMFS-L volume was indeed limited to 8GB, and the free space beyond that was automatically formatted as a VMFS datastore:

![Reviewing the partition diagram](/storage_device.png)
![Reviewing the partition diagram](storage_device.png)

And I can also take a peek at that local datastore:

![Local datastore](/storage_datastore.png)
![Local datastore](storage_datastore.png)

With 200+ gigabytes of free space on the datastore, I should have ample room for a few lightweight VMs.
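The same layout can be double-checked from the ESXi shell; another quick sketch (the device identifier is a placeholder):

```bash
# List mounted filesystems (boot banks, ESX-OSData, and the VMFS datastore) with size and free space
esxcli storage filesystem list

# Dump the partition table of the USB disk to see the 8GB OSData partition alongside the datastore
partedUtil getptbl /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
```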

#### Adding to vCenter
Alright, let's go ahead and bring the new host into my vCenter environment. That starts off just like any other host, by right-clicking an inventory location in the *Hosts & Clusters* view and selecting **Add Host**.

![Starting the process](/add_host.png)
![Starting the process](add_host.png)

![Reviewing the host details](/add_host_confirm.png)
![Reviewing the host details](add_host_confirm.png)

![Successfully added to the vCenter](/host_added.png)
![Successfully added to the vCenter](host_added.png)

Success! I've now got a single-board hypervisor connected to my vCenter. Now let's give that host a workload.[^workloads]

@@ -196,16 +196,16 @@ Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```
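Output like that typically comes from Python's built-in web server, so serving the OVA probably only takes a one-liner on the Debian VM; a sketch, with a hypothetical path:

```bash
# Serve the directory containing photon_uefi.ova over HTTP on port 8000
cd ~/ovas    # hypothetical location of the downloaded OVA
python3 -m http.server 8000
```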

That will let me deploy from a resource already inside my lab network instead of transferring the OVA from my laptop. So now I can go back to my vSphere Client and go through the steps to **Deploy OVF Template** to the new host, and I'll plug in the URL `http://deb01.lab.bowdre.net:8000/photon_uefi.ova`:

![Deploying a template from URL](/deploy_from_url.png)
![Deploying a template from URL](deploy_from_url.png)

I'll name it `pho01` and drop it in an appropriate VM folder:

![Naming the new VM](/name_vm.png)
![Naming the new VM](name_vm.png)

And place it on the new Quartz64 host:

![Host placement](/vm_placement.png)
![Host placement](vm_placement.png)

The rest of the OVF deployment is basically just selecting the default options and clicking through to finish it. And then once it's deployed, I'll go ahead and power on the new VM.

![The newly-created Photon VM](/new_vm.png)
![The newly-created Photon VM](new_vm.png)

#### Configuring Photon
There are just a few things I'll want to configure on this VM before I move on to installing Tailscale, and I'll start out simply by logging in with the remote console.

@@ -214,7 +214,7 @@ There are just a few things I'll want to configure on this VM before I move on t

The default password for Photon's `root` user is `changeme`. You'll be forced to change that at first login.
{{% /notice %}}

![First login, and the requisite password change](/first_login.png)
![First login, and the requisite password change](first_login.png)

Now that I'm in, I'll set the hostname appropriately:
```bash

@@ -376,17 +376,17 @@ sudo tailscale up --advertise-tags "tag:home" --advertise-route "192.168.1.0/24"
```

That will return a URL I can use to authenticate, and I'll then be able to view and manage the new Tailscale node from the `login.tailscale.com` admin portal:

![Success!](/new_tailscale_node.png)
![Success!](new_tailscale_node.png)

You might remember [from last time](/secure-networking-made-simple-with-tailscale/#subnets-and-exit-nodes) that the "Subnets (!)" label indicates that this node is attempting to advertise a subnet route but that route hasn't yet been accepted through the admin portal. You may also remember that the `192.168.1.0/24` subnet is already being advertised by my `vyos` node:[^hassos]

![Actively-routed subnets show up black, while advertised-but-not-currently-routed subnets appear grey](/advertised_subnets.png)
![Actively-routed subnets show up black, while advertised-but-not-currently-routed subnets appear grey](advertised_subnets.png)

Things could potentially get messy if I have two nodes advertising routes for the same subnet[^failover] so I'm going to use the admin portal to disable that route on `vyos` before enabling it for `pho01`. I'll let `vyos` continue to route the `172.16.0.0/16` subnet (which only exists inside the NUC's vSphere environment after all) and it can continue to function as an Exit Node as well.

![Disabling the subnet on vyos](/disabling_subnet_on_vyos.png)
![Disabling the subnet on vyos](disabling_subnet_on_vyos.png)

![Enabling the subnet on pho01](/enabling_subnet_on_pho01.png)
![Enabling the subnet on pho01](enabling_subnet_on_pho01.png)

![Updated subnets](/updated_subnets.png)
![Updated subnets](updated_subnets.png)

Now I can remotely access the VM (and thus my homelab!) from any of my other Tailscale-enrolled devices!
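A quick way to verify that from another tailnet device is the Tailscale CLI; a small sketch (the machine name comes from this post, and the output will vary):

```bash
# Confirm the new node shows up and responds over the tailnet
tailscale status | grep pho01
tailscale ping pho01

# Confirm the subnet route works by reaching a lab IP directly (192.168.1.12 is the vCenter)
ping -c 3 192.168.1.12
```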
@@ -32,14 +32,14 @@ The TCE team has created a [rather detailed guide](https://tanzucommunityedition

[^packages]: Per VMware, "Pinniped provides the authentication service, which uses Dex to connect to identity providers such as Active Directory."

### Prerequisite
In order to put the "Secure" in LDAPS, I need to make sure my Active Directory domain controller is configured for that, and that means also creating a Certificate Authority for issuing certificates. I followed the steps [here](http://vcloud-lab.com/entries/windows-2016-server-r2/configuring-secure-ldaps-on-domain-controller) to get this set up in my homelab. I can then point my browser to `http://win01.lab.bowdre.net/certsrv/certcarc.asp` to download the base64-encoded CA certificate since I'll need that later.

![Downloading the CA cert](/download_ca_cert.png)
![Downloading the CA cert](download_ca_cert.png)
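Before handing that certificate to Tanzu, it's worth confirming that the domain controller is actually answering on the LDAPS port and presenting a certificate issued by the new CA. A sketch using standard `openssl` tooling (the hostname is the one from the post):

```bash
# Grab the certificate presented on 636 and print its subject, issuer, and validity window
openssl s_client -connect win01.lab.bowdre.net:636 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```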

With that sorted, I'm ready to move on to creating a new TCE cluster with an LDAPS identity provider configured.

### Cluster creation
The [cluster deployment steps](/tanzu-community-edition-k8s-homelab/#management-cluster) are very similar to what I did last time so I won't repeat all those instructions here. The only difference is that this time I don't skip past the Identity Management screen; instead, I'll select the LDAPS radio button and get ready to fill out the form.

#### Identity management configuration

![Identity Management section](/identity_management_1.png)
![Identity Management section](identity_management_1.png)

**LDAPS Identity Management Source**

| Field | Value | Notes |

@@ -66,13 +66,13 @@ The [cluster deployment steps](/tanzu-community-edition-k8s-homelab/#management-

And I'll copy the contents of the base64-encoded CA certificate I downloaded earlier and paste them into the Root CA Certificate field.

![Completed Identity Management section](/identity_management_2.png)
![Completed Identity Management section](identity_management_2.png)

Before moving on, I can use the **Verify LDAP Configuration** button to quickly confirm that the connection is set up correctly. (I discovered that it doesn't honor the attribute selections made on the previous screen so I have to search for my Common Name (`John Bowdre`) instead of my username (`john`), but this at least lets me verify that the connection and certificate are working correctly.)

![LDAPS test](/ldaps_test.png)
![LDAPS test](ldaps_test.png)
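The same check can also be run outside the wizard with `ldapsearch`, which makes it easier to experiment with filters and attributes. This is only a sketch: the bind account, base DN, and CA file name are assumptions rather than values from the post.

```bash
# Query the DC over LDAPS, validating the chain against the downloaded CA certificate
LDAPTLS_CACERT=./lab-ca.cer ldapsearch \
  -H ldaps://win01.lab.bowdre.net:636 \
  -D 'CN=LDAP Bind,CN=Users,DC=lab,DC=bowdre,DC=net' -W \
  -b 'DC=lab,DC=bowdre,DC=net' \
  '(cn=John Bowdre)' sAMAccountName memberOf
```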

I can then click through the rest of the wizard but (as before) I'll stop on the final review page. Despite entering everything correctly in the wizard, I'll actually need to make a small edit to the deployment configuration YAML, so I make a note of its location and copy it to a file called `tce-mgmt-deploy.yaml` in my working directory so that I can take a look.

![Reviewing the cluster configuration file](/review_cluster_configuration.png)
![Reviewing the cluster configuration file](review_cluster_configuration.png)

#### Editing the cluster spec
Remember that awkward `member:1.2.840.113556.1.4.1941:` attribute from earlier? Here's how it looks within the TCE cluster-defining YAML:

@@ -119,7 +119,7 @@ tanzu management-cluster create tce-mgmt -f tce-mgmt-deploy.yaml

This will probably take 10-15 minutes to deploy so it's a great time to go top off my coffee.

![Coffee break!](/coffee.gif)
![Coffee break!](coffee.gif)

And we're back - and with a friendly success message in the console:

@@ -288,10 +288,10 @@ After assuming the non-admin context, the next time I try to interact with the c
```

But it will shortly spawn a browser page prompting me to log in:

![Dex login prompt](/dex_login_prompt.png)
![Dex login prompt](dex_login_prompt.png)

Doing so successfully will yield:

![Dex login success!](/dex_login_success.png)
![Dex login success!](dex_login_success.png)

And the `kubectl` command will return the expected details:
```bash

@@ -301,7 +301,7 @@ tce-mgmt-control-plane-v8l8r   Ready    control-plane,master   29h   v1.21.5+vm
tce-mgmt-md-0-847db9ddc-5bwjs   Ready    <none>                 28h   v1.21.5+vmware.1
```

![It's working!!! Holy crap, I can't believe it.](/its-working.gif)
![It's working!!! Holy crap, I can't believe it.](its-working.gif)

So I've now successfully logged in to the management cluster as a non-admin user with my Active Directory credentials. Excellent!

@@ -363,7 +363,7 @@ tce-work-md-0-bcfdc4d79-vn9xb   Ready    <none>                 11m   v1.21.5+vm
```

Now I can *Do Work*!

![Back to the grind](/cat-working.gif)
![Back to the grind](cat-working.gif)

{{% notice note "Create DHCP reservations for control plane nodes" %}}
VMware [points out](https://tanzucommunityedition.io/docs/latest/verify-deployment/#configure-dhcp-reservations-for-the-control-plane-nodes-vsphere-only) that it's important to create DHCP reservations for the IP addresses which were dynamically assigned to the control plane nodes in both the management and workload clusters, so be sure to take care of that before getting too involved in "Work".

@@ -109,36 +109,36 @@ nessus   LoadBalancer   100.67.16.51   192.168.1.79   443:31260/TCP   57s
```

I point my browser to `https://192.168.1.79` and see that it's a great time for a quick coffee break since it will take a few minutes for Nessus to initialize itself:

![Nessus Initialization](/nessus_init.png)
![Nessus Initialization](nessus_init.png)

Eventually that gets replaced with a login screen, where I can authenticate using the username and password specified earlier in the YAML.

![Nessus login screen](/nessus_login.png)
![Nessus login screen](nessus_login.png)

After logging in, I get prompted to run a discovery scan to identify hosts on the network. There's a note that hosts revealed by the discovery scan will *not* count against my 16-host limit unless/until I select individual hosts for more detailed scans. That's good to know for future efforts, but for now I'm focused on just scanning my one vCenter server so I dismiss the prompt.

What I *am* interested in is scanning my vCenter for the Log4Shell vulnerability so I'll hit the friendly blue **New Scan** button at the top of the *Scans* page to create my scan. That shows me a list of *Scan Templates*:

![Scan templates](/scan_templates.png)
![Scan templates](scan_templates.png)

I'll scroll down a bit and pick the first *Log4Shell* template:

![Log4Shell templates](/log4shell_templates.png)
![Log4Shell templates](log4shell_templates.png)

I plug in a name for the scan and enter my vCenter IP (`192.168.1.12`) as the lone scan target:

![Naming the scan and selecting the target](/scan_setup_page_1.png)
![Naming the scan and selecting the target](scan_setup_page_1.png)

There's a note there that I'll also need to include credentials so that the Nessus scanner can log in to the target in order to conduct the scan, so I pop over to the aptly-named *Credentials* tab to add some SSH credentials. This is just my lab environment so I'll give it the `root` credentials, but if I were running Nessus in a real environment I'd probably want to use a dedicated user account just for scans.

![Giving credentials for the scan](/scan_setup_page2.png)
![Giving credentials for the scan](scan_setup_page2.png)

Now I can scroll to the bottom of the page, click the down-arrow next to the *Save* button and select the **Launch** option to kick off the scan:

![Go for launch](/launch.png)
![Go for launch](launch.png)

That drops me back to the *My Scans* view where I can see the status of my scan. I'll grab another coffee while I stare at the little green spinny thing.

![My scans](/my_scans.gif)
![My scans](my_scans.gif)

Okay, break's over - and so is the scan! Now I can click on the name of the scan to view the results:

![Results summary](/scan_results_summary.png)
![Results summary](scan_results_summary.png)

And I can drill down into the vulnerability details:

![Log4j-related vulnerabilities](/scan_results_log4j.png)
![Log4j-related vulnerabilities](scan_results_log4j.png)

This reveals a handful of findings related to old 1.x versions of Log4j (which went EOL in 2015 - yikes!) as well as the [CVE-2021-44832](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) Remote Code Execution vulnerability (which is resolved in Log4j 2.17.1), but the inclusion of Log4j 2.17.0 in vCenter 7.0U3c *was* sufficient to close the highly-publicized [CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228) Log4Shell vulnerability. Hopefully VMware can get these other Log4j vulnerabilities taken care of in another upcoming vCenter release.
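For a quick manual spot-check to complement the scanner, it's also possible to hunt for Log4j jars directly on a host over SSH and read the version out of each jar's manifest. A generic sketch, not something from the post (it assumes `unzip` is available on the target):

```bash
# Find log4j-core jars and print the version recorded in each manifest
find / -name 'log4j-core*.jar' 2>/dev/null | while read -r jar; do
  echo "== ${jar}"
  unzip -p "${jar}" META-INF/MANIFEST.MF | grep -i 'implementation-version'
done
```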
@@ -184,16 +184,16 @@ Report:

### Tailscale management
Now that the Tailscale client is installed on my devices and I've verified that they can talk to each other, it might be a good time to *log in* at [`login.tailscale.com`](https://login.tailscale.com/) to take a look at the Tailscale admin console.

![Tailscale admin console](/admin_console.png)
![Tailscale admin console](admin_console.png)

#### Subnets and Exit Nodes
See how the `vyos` node has little labels on it about "Subnets (!)" and "Exit Node (!)"? The exclamation marks are there because the node is *advertising* subnets and its exit node eligibility, but those capabilities haven't actually been turned on yet. To enable the `vyos` node to function as a subnet router (for the `172.16.0.0/16` and `192.168.1.0/24` networks listed beneath its Tailscale IP) and as an exit node (for internet-bound traffic from other Tailscale nodes), I need to click on the little three-dot menu icon at the right edge of the row and select the "Edit route settings..." option.

![The menu contains some other useful options too - we'll get to those!](/edit_menu.png)
![The menu contains some other useful options too - we'll get to those!](edit_menu.png)

![Edit route settings](/route_settings.png)
![Edit route settings](route_settings.png)

Now I can approve the subnet routes (individually or all at once) and allow the node to route traffic to the internet as well[^exit_node].

![Enabled the routes](/enabled_routes.png)
![Enabled the routes](enabled_routes.png)

Cool! But now that's giving me another warning...

@@ -202,26 +202,26 @@ Cool! But now that's giving me another warning...

#### Key expiry
By default, Tailscale [expires each node's encryption keys every 180 days](https://tailscale.com/kb/1028/key-expiry/). This improves security (particularly over vanilla WireGuard, which doesn't require any key rotation) but each node will need to reauthenticate (via `tailscale up`) in order to get a new key. It may not make sense to do that for systems acting as subnet routers or exit nodes since they would stop passing all Tailscale traffic once the key expires. That would also be a problem for my cloud servers, which are *only* accessible via Tailscale; if I can't log in through SSH (since it's blocked at the firewall) then I can't reauthenticate Tailscale to regain access. For those systems, I can click that three-dot menu again and select the "Disable key expiry" option. I tend to do this for my "always on" tailnet members and just enforce the key expiry for my "client" type devices which could potentially be physically lost or stolen.

![Machine list showing enabled Subnet Router and Exit Node and disabled Key Expiry](/no_expiry.png)
![Machine list showing enabled Subnet Router and Exit Node and disabled Key Expiry](no_expiry.png)

#### Configuring DNS
It's great that all my Tailscale machines can talk to each other directly by their respective Tailscale IP addresses, but who wants to keep up with IPs? I sure don't. Let's do some DNS. I'll start out by clicking on the [DNS](https://login.tailscale.com/admin/dns) tab in the admin console.

![The DNS options](/dns_tab.png)
![The DNS options](dns_tab.png)

I need to add a Global Nameserver before I can enable MagicDNS so I'll click on the appropriate button to enter the *Tailscale IP*[^dns_ip] of my home DNS server (which is using [NextDNS](https://nextdns.io/) as the upstream resolver).

![Adding a global name server](/add_global_ns.png)
![Adding a global name server](add_global_ns.png)

I'll also enable the toggle to "Override local DNS" to make sure all queries from connected clients are going through this server (and thus extend the NextDNS protection to all clients without having to configure them individually).

![Overriding local DNS configuration](/override_local_dns.png)
![Overriding local DNS configuration](override_local_dns.png)

I can also define search domains to be used for unqualified DNS queries by adding another name server with the same IP address, enabling the "Restrict to search domain" option, and entering the desired domain:

![Entering a search domain](/restrict_search_domain.png)
![Entering a search domain](restrict_search_domain.png)

This will let me resolve hostnames when connected remotely to my lab without having to type the domain suffix (e.g., `vcsa` versus `vcsa.lab.bowdre.net`).

And, finally, I can click the "Enable MagicDNS" button to turn on the magic. This adds a new nameserver with a private Tailscale IP which will resolve Tailscale hostnames to their internal IP addresses.

![MagicDNS Enabled!](/magicdns.png)
![MagicDNS Enabled!](magicdns.png)

Now I can log in to my Matrix server by simply typing `ssh matrix`. Woohoo!

@@ -257,13 +257,13 @@ I'm going to start by creating a group called `admins` and add myself to that gr
```

Now I have two options for applying tags to devices. I can either do it from the admin console, or by passing the `--advertise-tags` flag to the `tailscale up` CLI command. I touched on the CLI approach earlier so I'll go with the GUI approach this time. It's simple - I just go back to the [Machines](https://login.tailscale.com/admin/machines) tab, click on the three-dot menu button for a machine, and select the "Edit ACL tags..." option.

![Edit ACL tags](/acl_menu.png)
![Edit ACL tags](acl_menu.png)

I can then pick the tag (or tags!) I want to apply:

![Selecting the tags](/selecting_tags.png)
![Selecting the tags](selecting_tags.png)

The applied tags have now replaced the owner information which was previously associated with each machine:

![Tagged machines](/tagged_machines.png)
![Tagged machines](tagged_machines.png)

#### ACLs
By default, Tailscale implements an implicit "Allow All" ACL. As soon as you start modifying the ACL, though, that switches to an implicit "Deny All". So I'll add new rules to explicitly state what communication should be permitted and everything else will be blocked.

@@ -11,7 +11,7 @@ usePageBundles: true
# featureImage: "file.png" # Sets featured image on blog post.
# featureImageAlt: 'Description of image' # Alternative text for featured image.
# featureImageCap: 'This is the featured image.' # Caption (optional).
thumbnail: "/tanzu_community_edition.png" # Sets thumbnail image appearing inside card on homepage.
thumbnail: "tanzu_community_edition.png" # Sets thumbnail image appearing inside card on homepage.
# shareImage: "share.png" # Designate a separate image for social media sharing.
codeLineNumbers: false # Override global value for showing of line numbers within code block.
series: Projects

@@ -27,7 +27,7 @@ comment: true # Disable comment if false.
---

Back in October, VMware [announced](https://tanzu.vmware.com/content/blog/vmware-tanzu-community-edition-announcement) [Tanzu Community Edition](https://tanzucommunityedition.io/) as a way to provide "a full-featured, easy-to-manage Kubernetes platform that’s perfect for users and learners alike." TCE bundles a bunch of open-source components together in a modular, "batteries included but swappable" way:

![Tanzu Community Edition components](/tanzu_community_edition.png)
![Tanzu Community Edition components](tanzu_community_edition.png)

I've been meaning to brush up on my Kubernetes skills so I thought deploying and using TCE in my self-contained [homelab](/vmware-home-lab-on-intel-nuc-9/) would be a fun and rewarding learning exercise - and it was!

@@ -59,7 +59,7 @@ Moving on to the [Getting Started](https://tanzucommunityedition.io/docs/latest/

I need to download a VMware OVA, which can be used for deploying my Kubernetes nodes, from the VMware Customer Connect portal [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=TCE-090)[^register]. There are a few different options available. I'll get the Photon release with the highest Kubernetes version currently available, `photon-3-kube-v1.21.2+vmware.1-tkg.2-12816990095845873721.ova`.

Once the file is downloaded, I'll log into my vCenter and use the **Deploy OVF Template** action to deploy a new VM using the OVA. I won't bother booting the machine once deployed but will rename it to `k8s-node` to make it easier to identify later on and then convert it to a template.

![New k8s-node template](/k8s-node_template.png)
![New k8s-node template](k8s-node_template.png)

[^register]: Register [here](https://customerconnect.vmware.com/account-registration) if you don't yet have an account.

@@ -156,33 +156,33 @@ Serving kickstart UI at http://[::]:8080
```

*Now* I can point my local browser to my VM and see the UI:

![The Tanzu Installer UI](/installer_ui.png)
![The Tanzu Installer UI](installer_ui.png)

And then I can click the button at the bottom left to save my eyes[^dark_mode] before selecting the option to deploy on vSphere.

![Configuring the IaaS Provider](/installer_iaas_provider.png)
![Configuring the IaaS Provider](installer_iaas_provider.png)

I'll plug in the FQDN of my vCenter and provide a username and password to use to connect to it, then hit the **Connect** button. That will prompt me to accept the vCenter's certificate thumbprint, and then I'll be able to select the virtual datacenter that I want to use. Finally, I'll paste in the SSH public key[^gen_key] I'll use for interacting with the cluster.

I click **Next** and move on to the Management Cluster Settings.

![Configuring the Management Cluster](/installer_management_cluster.png)
![Configuring the Management Cluster](installer_management_cluster.png)

This is for a lab environment that's fairly memory-constrained, so I'll pick the single-node *Development* setup with a *small* instance type. I'll name the cluster `tce-mgmt` and stick with the default `kube-vip` control plane endpoint provider. I plug in the control plane endpoint IP that I'll use for connecting to the cluster and select the *small* instance type for the worker node type.

I don't have an NSX Advanced Load Balancer or any Metadata to configure so I'll skip past those steps and move on to configuring the Resources.

![Configuring Resources](/installer_resources.png)
![Configuring Resources](installer_resources.png)

Here I choose to place the Tanzu-related resources in a VM folder named `Tanzu`, to store their data on my single host's single datastore, and to deploy to the one-host `physical-cluster` cluster.

Now for the Kubernetes Networking Settings:

![Configuring Kubernetes Networking](/installer_k8s_networking.png)
![Configuring Kubernetes Networking](installer_k8s_networking.png)

This bit is actually pretty easy. For Network Name, I select the vSphere network where the `192.168.1.0/24` network I identified earlier lives, `d-Home-Mgmt`. I leave the service and pod CIDR ranges as default.

I disable the Identity Management option and then pick the `k8s-node` template I had imported to vSphere earlier.

![Configuring the OS Image](/installer_image.png)
![Configuring the OS Image](installer_image.png)

I skip the Tanzu Mission Control piece (since I'm still waiting on access to [TMC Starter](https://tanzu.vmware.com/tmc-starter)) and click the **Review Configuration** button at the bottom of the screen to review my selections.

![Reviewing the configuration](/installer_review.png)
![Reviewing the configuration](installer_review.png)

See the option at the bottom to copy the CLI command? I'll need to use that since clicking the friendly **Deploy** button doesn't seem to work while connected to the web server remotely.

@@ -222,7 +222,7 @@ Would you like to deploy a non-integrated Tanzu Kubernetes Grid management clust

That's not what I'm after in this case, though, so I'll answer with an `n` and a `y` to confirm that I want the non-integrated TKG deployment.

And now I go get coffee as it'll take 10-15 minutes for the deployment to complete.

![Coffee break!](/coffee_break.gif)
![Coffee break!](coffee_break.gif)

Okay, I'm back - and so is my shell prompt! The deployment completed successfully:
```

@@ -343,10 +343,10 @@ NAME                                                     READY  SEVERITY  RE
```

I can also go into vCenter and take a look at the VMs which constitute the two clusters:

![Cluster VMs](/clusters_in_vsphere.png)
![Cluster VMs](clusters_in_vsphere.png)

I've highlighted the two Control Plane nodes. They got their IP addresses assigned by DHCP, but [VMware says](https://tanzucommunityedition.io/docs/latest/verify-deployment/#configure-dhcp-reservations-for-the-control-plane-nodes-vsphere-only) that I need to create reservations for them to make sure they don't change. So I'll do just that.

![DHCP reservations on Google Wifi](/dhcp_reservations.png)
![DHCP reservations on Google Wifi](dhcp_reservations.png)

Excellent, I've got a Tanzu management cluster and a Tanzu workload cluster. What now?

@@ -436,7 +436,7 @@ Node: tce-work-md-0-687444b744-cck4x/192.168.1.145
```

So I can point my browser at `http://192.168.1.145:30001` and see the demo:

![yelb demo page](/yelb_nodeport_demo.png)
![yelb demo page](yelb_nodeport_demo.png)

After marveling at my own magnificence[^magnificence] for a few minutes, I'm ready to move on to something more interesting - but first, I'll just delete the `yelb` namespace to clean up the work I just did:
```bash

@@ -501,7 +501,7 @@ yelb-ui   LoadBalancer   100.67.177.185   192.168.1.65   80:32339/TCP   4h35m
```

And it's got an IP! I can point my browser to `http://192.168.1.65` now and see:

![Successful LoadBalancer test!](/yelb_loadbalancer_demo.png)
![Successful LoadBalancer test!](yelb_loadbalancer_demo.png)

I'll keep the `kube-vip` load balancer since it'll come in handy, but I have no further use for `yelb`:
```bash

@@ -513,10 +513,10 @@ namespace "yelb" deleted

At some point, I'm going to want to make sure that data from my Tanzu workloads sticks around persistently - and for that, I'll need to [define some storage stuff](https://tanzucommunityedition.io/docs/latest/vsphere-cns/).

First up, I'll add a new tag called `tkg-storage-local` to the `nuchost-local` vSphere datastore that I want to use for storing Tanzu volumes:

![Tag (and corresponding category) applied](/storage_tag.png)
![Tag (and corresponding category) applied](storage_tag.png)

Then I create a new vSphere Storage Policy called `tkg-storage-policy` which states that data covered by the policy should be placed on the datastore(s) tagged with `tkg-storage-local`:

![My Tanzu storage policy](/storage_policy.png)
![My Tanzu storage policy](storage_policy.png)

So that's the vSphere side of things sorted; now to map that back to the Kubernetes side. For that, I'll need to define a Storage Class tied to the vSphere Storage profile so I drop these details into a new file called `vsphere-sc.yaml`:
```yaml

@@ -566,7 +566,7 @@ vsphere-demo-1   Bound    pvc-36cc7c01-a1b3-4c1c-ba0d-dff3fd47f93b   5Gi
```

And for bonus points, I can see that the container volume was created on the vSphere side:

![Container Volume in vSphere](/container_volume_in_vsphere.png)
![Container Volume in vSphere](container_volume_in_vsphere.png)

So that's storage sorted. I'll clean up my test volume before moving on:
```bash

@@ -937,24 +937,24 @@ replicaset.apps/phpipam-www-769c95c68d   1         1         1       5m59s
```

And I can point my browser to the `EXTERNAL-IP` associated with the `phpipam-www` service to see the initial setup page:

![phpIPAM installation page](/phpipam_install_page.png)
![phpIPAM installation page](phpipam_install_page.png)

I'll click the **New phpipam installation** option to proceed to the next step:

![Database initialization options](/phpipam_database_install_options.png)
![Database initialization options](phpipam_database_install_options.png)

I'm all for easy so I'll opt for **Automatic database installation**, which will prompt me for the credentials of an account with rights to create a new database within the MariaDB instance. I'll enter `root` and the password I used for the `MYSQL_ROOT_PASSWORD` variable above:

![Automatic database install](/phpipam_automatic_database_install.png)
![Automatic database install](phpipam_automatic_database_install.png)

I click the **Install database** button and I'm then met with a happy success message saying that the `phpipam` database was successfully created.

And that eventually gets me to the post-install screen, where I set an admin password and proceed to log in:

![We made it to the post-install!](/phpipam_post_install.png)
![We made it to the post-install!](phpipam_post_install.png)

To create a new scan agent, I go to **Menu > Administration > Server management > Scan agents**.

![Scan agents screen](/scan_agents.png)
![Scan agents screen](scan_agents.png)

And click the button to create a new one:

![Creating a new agent](/create_new_agent.png)
![Creating a new agent](create_new_agent.png)

I'll copy the agent code and plug it into my `phpipam-agent.yaml` file:
```yaml

@@ -969,18 +969,18 @@ deployment.apps/phpipam-agent created
```

The scan agent isn't going to do anything until it's assigned to a subnet though, so now I head to **Administration > IP related management > Sections**. phpIPAM comes with a few default sections and ranges and such defined so I'll delete those and create a new one that I'll call `Lab`.

![Section management](/section_management.png)
![Section management](section_management.png)

Now I can create a new subnet within the `Lab` section by clicking the **Subnets** menu, selecting the `Lab` section, and clicking **+ Add subnet**.

![Empty subnets menu](/subnets_empty.png)
![Empty subnets menu](subnets_empty.png)

I'll define the new subnet as `192.168.1.0/24`. Once I enable the option to *Check hosts status*, I'll then be able to specify my new `remote-agent` as the scanner for this subnet.

![Creating a new subnet](/creating_new_subnet.png)
![A new (but empty) subnet](/new_subnet_pre_scan.png)
![Creating a new subnet](creating_new_subnet.png)
![A new (but empty) subnet](new_subnet_pre_scan.png)

It shows the scanner associated with the subnet, but no data yet. I'll need to wait a few minutes for the first scan to kick off (at the five-minute interval I defined in the configuration).

![](/five_minutes.gif)
![Newly discovered IPs!](/newly-discovered_IPs.png)
![](five_minutes.gif)
![Newly discovered IPs!](newly-discovered_IPs.png)

Woah, it actually works!