mirror of https://github.com/jbowdre/virtuallypotato.git

update posts with image descriptions and formatting fixes

parent 57081c3c2a
commit 2e8e5d58a5
6 changed files with 47 additions and 47 deletions
@@ -71,13 +71,13 @@ I can then go to Service Broker and drag the new fields onto the Custom Form can
### vRO workflow
Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](/vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning".
![image.png](/images/posts-2020/X9JhgWx8x.png)
![Naming the new workflow](/images/posts-2020/X9JhgWx8x.png)
The workflow will have a single input from vRA, `inputProperties` of type `Properties`.
![image.png](/images/posts-2020/zHrp6GPcP.png)
![Workflow input](/images/posts-2020/zHrp6GPcP.png)
The first thing this workflow needs to do is parse `inputProperties (Properties)` to get the name of the VM, and it will then use that information to query vCenter and grab the corresponding VM object. So I'll add a scriptable task item to the workflow canvas and call it `Get VM Object`. It will take `inputProperties (Properties)` as its sole input, and output a new variable called `vm` of type `VC:VirtualMachine`.
![image.png](/images/posts-2020/5ATk99aPW.png)
![Get VM Object action](/images/posts-2020/5ATk99aPW.png)
The script for this task is fairly straightforward:
```js
@@ -93,7 +93,7 @@ vm = vms[0]
```
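The script body itself is mostly collapsed by the diff hunking above (the next hunk's context shows it ending around `vm = vms[0]`). As a rough sketch only - assuming the VM name arrives in `inputProperties.resourceNames` and the lookup goes through the vCenter plugin's scripting API - such a task could look something like this:

```js
// Hypothetical sketch of the "Get VM Object" scriptable task.
// Input: inputProperties (Properties); output: vm (VC:VirtualMachine).
var searchName = inputProperties.resourceNames[0];   // name of the freshly-provisioned VM
var allVms = VcPlugin.getAllVirtualMachines();        // every VM known to the vCenter plugin
var vms = [];
for (var i in allVms) {
  if (allVms[i].name === searchName) {
    vms.push(allVms[i]);
  }
}
vm = vms[0];                                           // matches the context line shown above
System.log("Found VM object for " + searchName);
```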
I'll add another scriptable task item to the workflow to actually apply the notes to the VM - I'll call it `Set Notes`, and it will take both `vm (VC:VirtualMachine)` and `inputProperties (Properties)` as its inputs.
![image.png](/images/posts-2020/w24V6YVOR.png)
![Set Notes action](/images/posts-2020/w24V6YVOR.png)
The first part of the script creates a new VM config spec, inserts the description into the spec, and then reconfigures the selected VM with the new spec.
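For reference, a bare-bones sketch of that first part might look like the snippet below (hedged guesses on my part: the exact property that carries the description and the variable names may differ from the actual script in the post):

```js
// Hypothetical sketch of the start of the "Set Notes" scriptable task.
// Inputs: vm (VC:VirtualMachine) and inputProperties (Properties).
var description = inputProperties.customProperties.description;  // assumed location of the description
var spec = new VcVirtualMachineConfigSpec();                      // fresh, empty config spec
spec.annotation = description;                                    // the VM "Notes" field is the annotation
vm.reconfigVM_Task(spec);                                         // reconfigure the VM with the new spec
System.log("Set notes on " + vm.name);
```

The hunk below picks up with the later part of the same script, where `System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField()` is used to stamp custom attributes onto the VM as well.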
@@ -118,17 +118,17 @@ System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField
### Extensibility subscription
Now I need to return to Cloud Assembly and create a new extensibility subscription that will call this new workflow at the appropriate time. I'll call it "VM Post-Provisioning" and attach it to the "Compute Post Provision" topic.
![image.png](/images/posts-2020/PmhVOWJsUn.png)
![Creating the new subscription](/images/posts-2020/PmhVOWJsUn.png)
And then I'll link it to my new workflow:
![image.png](/images/posts-2020/cEbWSOg00.png)
![Selecting the workflow](/images/posts-2020/cEbWSOg00.png)
### Testing
And then back to Service Broker to request a VM and see if it works:
![image.png](/images/posts-2020/Lq9DBCK_Y.png)
![Test request](/images/posts-2020/Lq9DBCK_Y.png)
It worked!
![image.png](/images/posts-2020/-Fuvz-GmF.png)
![New VM with notes](/images/posts-2020/-Fuvz-GmF.png)
In the future, I'll be exploring more features that I can add to this "VM Post-Provisioning" workflow, like creating static DNS records as needed.
@@ -100,34 +100,34 @@ Now would also be a good time to go ahead and enable cron jobs so that phpIPAM w
Okay, let's now move on to the phpIPAM web-based UI to continue the setup. After logging in at `https://ipam.lab.bowdre.net/`, I clicked on the red **Administration** menu at the right side and selected **phpIPAM Settings**. Under the **Site Settings** section, I enabled the *Prettify links* option, and under the **Feature Settings** section I toggled on the *API* component. I then hit *Save* at the bottom of the page to apply the changes.
Next, I went to the **Users** item on the left-hand menu to create a new user account which will be used by vRA. I named it `vra`, set a password for the account, and made it a member of the `Operators` group, but didn't grant any special module access.
![Screenshot 2021-02-20 14.18.47.png](/images/posts-2020/DiqyOlf5S.png)
![Screenshot 2021-02-20 14.20.49.png](/images/posts-2020/QoxVKC11t.png)
![Creating vRA service account in phpIPAM](/images/posts-2020/DiqyOlf5S.png)
![Creating vRA service account in phpIPAM](/images/posts-2020/QoxVKC11t.png)
The last step in configuring API access is to create an API key. This is done by clicking the **API** item on that left side menu and then selecting *Create API key*. I gave it the app ID `vra`, granted Read/Write permissions, and set the *App Security* option to "SSL with User token".
![Screenshot 2021-02-20 14.23.50.png](/images/posts-2020/-aPGJhSvz.png)
![Generating the API key](/images/posts-2020/-aPGJhSvz.png)
Once we get things going, our API calls will authenticate with the username and password to get a token and bind that to the app ID.
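To make that flow a bit more concrete, here's a rough JavaScript sketch of the exchange (the actual vRA integration referenced later in this post is written in Python; the hostname, app ID, and username below are just the values from this walkthrough, and the endpoints reflect my reading of the phpIPAM API docs rather than tested code):

```js
// Hypothetical sketch of phpIPAM's token-based API auth flow, using Node 18+ global fetch.
const base = "https://ipam.lab.bowdre.net/api/vra";   // the app ID ("vra") is part of the URL path

async function getToken(username, password) {
  // POST /user/ with HTTP Basic credentials returns a short-lived API token
  const resp = await fetch(`${base}/user/`, {
    method: "POST",
    headers: {
      Authorization: "Basic " + Buffer.from(`${username}:${password}`).toString("base64"),
    },
  });
  const body = await resp.json();
  return body.data.token;
}

async function listSections(token) {
  // later calls just present the token in a request header
  const resp = await fetch(`${base}/sections/`, { headers: { token: token } });
  return (await resp.json()).data;
}

// usage: authenticate as the 'vra' user, then read back the configured sections
getToken("vra", "its-a-secret")
  .then((token) => listSections(token))
  .then((sections) => console.log(sections))
  .catch((err) => console.error(err));
```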
### Step 2: Configuring phpIPAM subnets
Our fancy new IPAM solution is ready to go - except for the whole bit about managing IPs. We need to tell it about the network segments we'd like it to manage. phpIPAM uses "Sections" to group subnets together, so we start by creating a new Section at **Administration > IP related management > Sections**. I named my new section `Lab` and left pretty much everything else at the defaults. Be sure that the `Operators` group has read/write access to this section and to the subnets we're going to create inside it!
![Screenshot 2021-02-20 14.33.39.png](/images/posts-2020/6yo39lXI7.png)
![Creating a section to hold the subnets](/images/posts-2020/6yo39lXI7.png)
We should also go ahead and create a Nameserver set so that phpIPAM will be able to tell its clients (vRA) what server(s) to use for DNS. Do this at **Administration > IP related management > Nameservers**. I created a new entry called `Lab` and pointed it at my internal DNS server, `192.168.1.5`.
![Screenshot 2021-02-20 14.40.57.png](/images/posts-2020/pDsEh18bx.png)
![Designating the nameserver](/images/posts-2020/pDsEh18bx.png)
Okay, we're finally ready to start entering our subnets at **Administration > IP related management > Subnets**. For each one, I entered the Subnet in CIDR format, gave it a useful description, and associated it with my `Lab` section. I expanded the *VLAN* dropdown and used the *Add new VLAN* option to enter the corresponding VLAN information, and also selected the Nameserver I had just created.
![Screenshot 2021-02-20 14.44.20.png](/images/posts-2020/-PHf9oUyM.png)
![Entering the first subnet](/images/posts-2020/-PHf9oUyM.png)
I also enabled the options *Mark as pool*, *Check hosts status*, *Discover new hosts*, and *Resolve DNS names*.
![Screenshot 2021-02-20 15.03.13.png](/images/posts-2020/SR7oD0jsG.png)
![Subnet options](/images/posts-2020/SR7oD0jsG.png)
I then used the *Scan subnets for new hosts* button to run a discovery scan against the new subnet.
![Screenshot 2021-02-20 15.06.41.png](/images/posts-2020/4WQ8HWJ2N.png)
![Scanning for new hosts](/images/posts-2020/4WQ8HWJ2N.png)
The scan only found a single host, `172.16.20.1`, which is the subnet's gateway address hosted by the VyOS router. I used the pencil icon to edit the IP and mark it as the gateway:
![Screenshot 2021-02-20 15.08.43.png](/images/posts-2020/2otDJvqRP.png)
![Identifying the gateway](/images/posts-2020/2otDJvqRP.png)
phpIPAM now knows the network address, mask, gateway, VLAN, and DNS configuration for this subnet - all things that will be useful for clients seeking an address. I then repeated these steps for the remaining subnets.
![Screenshot 2021-02-20 15.13.38.png](/images/posts-2020/09RIXJc12.png)
![More subnets!](/images/posts-2020/09RIXJc12.png)
Now for the *real* fun!
@@ -351,10 +351,10 @@ try:
You can view the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/validate_endpoint/source.py).
After completing each operation, run `mvn package -PcollectDependencies -Duser.id=${UID}` to build again, and then import the package to vRA again. This time, you'll see the new "API App ID" field on the form:
![Screenshot 2021-02-21 16.30.33.png](/images/posts-2020/bpx8iKUHF.png)
![Validating the new IPAM endpoint](/images/posts-2020/bpx8iKUHF.png)
Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
![Screenshot 2021-02-21 19.18.43.png](/images/posts-2020/e4PTJxfqH.png)
![Extensibility action runs](/images/posts-2020/e4PTJxfqH.png)
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
```json
{
@@ -503,7 +503,7 @@ vRA runs the `phpIPAM_GetIPRanges` action about every ten minutes so keep checki
Note that it *did not* pick up my "Home Network" range since it wasn't set to be a pool.
We can also navigate to **Infrastructure > Networks > IP Ranges** to view them in all their glory:
![Screenshot 2021-02-21 17.49.12.png](/images/posts-2020/7_QI-Ti8g.png)
![Reviewing the discovered IP ranges](/images/posts-2020/7_QI-Ti8g.png)
You can then follow [these instructions](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) to associate the external IP ranges with networks available for vRA deployments.
@@ -637,7 +637,7 @@ The full `allocate_ip` code is [here](https://github.com/jbowdre/phpIPAM-for-vRA
[2021-02-22 01:31:41,790] [INFO] - Successfully reserved ['172.16.40.2'] for BOW-VLTST-XXX41.
```
You can also check for a reserved address in phpIPAM:
![Screenshot 2021-02-21 19.32.38.png](/images/posts-2020/3BQnEd0bY.png)
![The reserved address in phpIPAM](/images/posts-2020/3BQnEd0bY.png)
Almost done!
@@ -13,7 +13,7 @@ It's a good idea to take a snapshot of your virtual appliances before applying a
*(Yes, that's a lesson I learned the hard way - and warnings about that are tragically hard to come by from what I've seen. So I'm sharing my notes so that you can avoid making the same mistake.)*
![Screenshot 2021-01-30 16.09.02.png](/images/posts-2020/XTaU9VDy8.png)
![Viewing replication status of linked vCenters](/images/posts-2020/XTaU9VDy8.png)
Take these steps when you need to snapshot linked vCenters to avoid breaking replication:
@@ -11,21 +11,21 @@ toc: false
---
I recently ran into a peculiar issue after upgrading my vRealize Automation homelab to the new 8.3 release, and the error message displayed in the UI didn't give me a whole lot of information to work with:
![Screenshot 2021-02-18 10.27.41.png](/images/posts-2020/IL29_Shlg.png)
![Unfortunately my 'Essential Googling The Error Message' O'RLY book was no help with making the bad words go away](/images/posts-2020/IL29_Shlg.png)
I connected to the vRA appliance to try to find the relevant log excerpt, but [doing so isn't all that straightforward](https://www.stevenbright.com/2020/01/vmware-vrealize-automation-8-0-logs/#:~:text=Access%20Logs%20from%20the%20CLI) given the containerized nature of the services.
So instead I used the `vracli log-bundle` command to generate a bundle of all relevant logs, and I then transferred the resulting (2.2GB!) `log-bundle.tar` to my workstation for further investigation. I expanded the tar and ran `tree -P '*.log'` to get a quick idea of what I've got to deal with:
![Screenshot 2021-02-18 11.01.56.png](/images/posts-2020/wAa9KjBHO.png)
![That's a lot of logs!](/images/posts-2020/wAa9KjBHO.png)
Ugh. Even if I knew which logs I wanted to look at (and I don't) it would take ages to dig through all of this. There's got to be a better way.
And there is! Visual Studio Code lets you open an entire directory tree in the editor:
![Screenshot 2021-02-18 12.19.17.png](/images/posts-2020/SBKtJ8K1p.png)
![Directory opened in VS Code](/images/posts-2020/SBKtJ8K1p.png)
You can then use "Find in Files" with `Ctrl`+`Shift`+`F`, and VS Code will *very* quickly search through all the files to find what you're looking for:
![Screenshot 2021-02-18 12.25.01.png](/images/posts-2020/PPZu_UOGO.png)
![Searching all files](/images/posts-2020/PPZu_UOGO.png)
You can also click the "Open in editor" link at the top of the search results to open the matching snippets in a single view:
![Screenshot 2021-02-18 12.31.46.png](/images/posts-2020/kJ_l7gPD2.png)
![All the matching strings together](/images/posts-2020/kJ_l7gPD2.png)
Adjusting the number at the far top right of that view will dynamically tweak how many context lines are included with each line containing the search term.
@@ -12,7 +12,7 @@ title: VMware Home Lab on Intel NUC 9
I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now so I figured it was about time to start documenting and sharing what I've done.
![Screenshot 2020-12-23 at 12.30.07.png](/images/posts-2020/SIDah-Lag.png)
![But boy would I love some more RAM](/images/posts-2020/SIDah-Lag.png)
### Hardware
*(Caution: here be affiliate links)*
@@ -37,27 +37,27 @@ I'm leveraging my $200 [vMUG Advantage subscription](https://www.vmug.com/member
#### Setting up the NUC
The NUC connects to my home network through its onboard gigabit Ethernet interface (`vmnic0`). (The NUC does have a built-in WiFi adapter but for some reason VMware hasn't yet allowed their hypervisor to connect over WiFi - weird, right?) I wanted to use a small 8GB thumbdrive as the host's boot device so I installed that in one of the NUC's internal USB ports. For the purpose of installation, I connected a keyboard and monitor to the NUC, and I configured the BIOS to automatically boot up when power is restored after a power failure.
I used the Chromebook Recovery Utility to write the ESXi installer ISO to *another* USB drive (how-to [here](burn-an-iso-to-usb-with-the-chromebook-recovery-utility)), inserted that bootable drive to a port on the front of the NUC, and booted the NUC from the drive. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address (`192.168.1.11`). I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.
I used the Chromebook Recovery Utility to write the ESXi installer ISO to *another* USB drive (how-to [here](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility)), inserted that bootable drive to a port on the front of the NUC, and booted the NUC from the drive. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address (`192.168.1.11`). I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.
I was then able to point my web browser to `https://192.168.1.11/ui/` to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (`vSwitch0`) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described [here](https://williamlam.com/2013/11/why-is-promiscuous-mode-forged.html)).
![ink (2).png](/images/posts-2020/w0HeFSi7Q.png)
![Allowing promiscuous mode and forged transmits](/images/posts-2020/w0HeFSi7Q.png)
I created a single datastore to span the entirety of that 1TB NVMe drive. The nested ESXi hosts will use VMDKs stored here to provide storage to the nested VMs.
![Screenshot 2020-12-28 at 12.24.57.png](/images/posts-2020/XDe98S4Fx.png)
![The new datastore](/images/posts-2020/XDe98S4Fx.png)
#### Domain Controller
I created a new Windows VM with 2 vCPUs, 4GB of RAM, and a 90GB virtual hard drive, and I booted it off a [Server 2019 evaluation ISO](https://www.microsoft.com/en-US/evalcenter/evaluate-windows-server-2019?filetype=ISO). I gave it a name, a static IP address, and proceeded to install and configure the Active Directory Domain Services and DNS Server roles. I created static A and PTR records for the vCenter Server Appliance I'd be deploying next (`vcsa.`) and the physical host (`nuchost.`). I configured ESXi to use this new server for DNS resolutions, and confirmed that I could resolve the VCSA's name from the host.
![Screenshot 2020-12-30 at 13.10.58.png](/images/posts-2020/4o5bqRiTJ.png)
![AD and DNS](/images/posts-2020/4o5bqRiTJ.png)
Before moving on, I installed the Chrome browser on this new Windows VM and also set up remote access via [Chrome Remote Desktop](https://remotedesktop.google.com/access/). This will let me remotely access and manage my lab environment without having to punch holes in the router firewall (or worry about securing said holes). And it's got "chrome" in the name so it will work just fine from my Chromebooks!
#### vCenter
I attached the vCSA installation ISO to the Windows VM and performed the vCenter deployment from there. (See, I told you that Chrome Remote Desktop would come in handy!)
![Screenshot 2020-12-30 at 14.51.09.png](/images/posts-2020/OOP_lstyM.png)
![vCenter deployment process](/images/posts-2020/OOP_lstyM.png)
After the vCenter was deployed and the basic configuration completed, I created a new cluster to contain the physical host. There's likely only ever going to be the one physical host but I like being able to logically group hosts in this way, particularly when working with PowerCLI. I then added the host to the vCenter by its shiny new FQDN.
![Screenshot 2021-01-05 10.39.54.png](/images/posts-2020/Wu3ZIIVTs.png)
![Shiny new cluster](/images/posts-2020/Wu3ZIIVTs.png)
I've now got a fully-functioning VMware lab, complete with a physical hypervisor to run the workloads, a vCenter server to manage the workloads, and a Windows DNS server to tell the workloads how to talk to each other. Since the goal is to ultimately simulate a (small) production environment, let's set up some additional networking before we add anything else.
@@ -85,7 +85,7 @@ Of course, not everything that I'm going to deploy in the lab will need to be ac
#### vSwitch1
I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks.
![Screenshot 2021-01-05 16.37.57.png](/images/posts-2020/7aNJa2Hlm.png)
![Second vSwitch](/images/posts-2020/7aNJa2Hlm.png)
#### VyOS
Wouldn't it be great if the VMs that are going to be deployed on those `1610`, `1620`, and `1630` VLANs could still have their traffic routed out of the internal networks? But doing routing requires a router (or so my network friends tell me)... so I deployed a VM running the open-source VyOS router platform. I used [William Lam's instructions for installing VyOS](https://williamlam.com/2020/02/how-to-automate-the-creation-multiple-routable-vlans-on-single-l2-network-using-vyos.html), making sure to attach the first network interface to the Home-Network portgroup and the second to the Isolated portgroup (VLAN 4095). I then set to work [configuring the router](https://docs.vyos.io/en/latest/quick-start.html).
@@ -189,24 +189,24 @@ Alright, it's time to start building up the nested environment. To start, I grab
Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the `physical-cluster` compute resource, and selected to thin provision the VMDKs on the local datastore. I chose the "Isolated" VM network which uses VLAN 4095 to make all the internal VLANs available on a single portgroup.
![Screenshot 2021-01-07 10.54.50.png](/images/posts-2020/zOJp-jqVb.png)
![Deploying the nested ESXi OVF](/images/posts-2020/zOJp-jqVb.png)
And I set the networking properties accordingly:
![Screenshot 2021-01-07 11.09.36.png](/images/posts-2020/PZ6FzmJcx.png)
![OVF networking settings](/images/posts-2020/PZ6FzmJcx.png)
These virtual appliances come with 3 hard drives. The first will be used as the boot device, the second for vSAN caching, and the third for vSAN capacity. I doubled the size of the second and third drives, to 8GB and 16GB respectively:
![Screenshot 2021-01-07 13.01.19.png](/images/posts-2020/nkdH7Jfxw.png)
![OVF storage configuration](/images/posts-2020/nkdH7Jfxw.png)
After booting the new host VMs, I created a new cluster in vCenter and then added the nested hosts:
![Screenshot 2021-01-07 13.28.03.png](/images/posts-2020/z8fvzu4Km.png)
![New nested hosts added to a cluster](/images/posts-2020/z8fvzu4Km.png)
Next, I created a new Distributed Virtual Switch to break out the VLAN trunk on the nested host "physical" adapters into the individual VLANs I created on the VyOS router. Again, each port group will need to allow Promiscuous Mode and Forged Transmits, and I set the dvSwitch MTU size to 9000 (to support Jumbo Frames on the vSAN and vMotion portgroups).
![Screenshot 2021-01-08 10.04.24.png](/images/posts-2020/arA7gurqh.png)
![New dvSwitch for nested traffic](/images/posts-2020/arA7gurqh.png)
I migrated the physical NICs and `vmk0` to the new dvSwitch, and then created new vmkernel interfaces for vMotion and vSAN traffic on each of the nested hosts:
![Screenshot 2021-01-19 10.03.27.png](/images/posts-2020/6-auEYd-W.png)
![ESXi vmkernel interfaces](/images/posts-2020/6-auEYd-W.png)
I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack, so I needed to append the `-S vmotion` flag to the command:
@@ -233,10 +233,10 @@ round-trip min/avg/max = 0.202/0.252/0.312 ms
```
Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single-site cluster, and I don't need to enable any additional services. When prompted, I claimed the 8GB drives for the cache tier and the 16GB drives for capacity.
![Screenshot 2021-01-23 17.35.34.png](/images/posts-2020/mw-rsq_1a.png)
![Configuring vSAN](/images/posts-2020/mw-rsq_1a.png)
It'll take a few minutes for vSAN to get configured on the cluster.
![Screenshot 2021-01-23 17.41.13.png](/images/posts-2020/mye0LdtNj.png)
![vSAN capacity is.... not much, but it's a start](/images/posts-2020/mye0LdtNj.png)
Huzzah! Next stop:
@@ -252,7 +252,7 @@ Anyhoo, each of these VMs will need to be resolvable in DNS so I started by crea
|`vra.lab.bowdre.net`|`192.168.1.42`|
I then attached the installer ISO to my Windows VM and ran through the installation from there.
![Screenshot 2021-02-05 16.28.41.png](/images/posts-2020/42n3aMim5.png)
![vRealize Easy Installer](/images/posts-2020/42n3aMim5.png)
Similar to the vCenter deployment process, this one prompts you for all the information it needs up front and then takes care of everything from there. That's great news because this is a pretty long deployment; it took probably two hours from clicking the final "Okay, do it" button to being able to log in to my shiny new vRealize Automation environment.
@@ -37,7 +37,7 @@ That's cool and all, and I could go ahead and request a deployment off of that
![Customize form](/images/posts-2020/ZPsS0oZuc.png)
When you start out, the custom form kind of jumbles up the available fields, so I'm going to start by dragging and dropping the fields to resemble the order defined in the Cloud Template:
![image.png](/images/posts-2020/oLwUg1k6T.png)
![Starting to customize the custom form](/images/posts-2020/oLwUg1k6T.png)
In addition to rearranging the request form fields, Custom Forms also provide significant control over how the form behaves. You can change how a field is displayed, define default values, make fields dependent upon other fields, and more. For instance, all of my templates and resources belong to a single project, so making the user select the project (from a set of 1) is kind of redundant. Every deployment has to be tied to a project so I can't just remove that field, but I can select the "Project" field on the canvas and change its *Visibility* to "No" to hide it. It will silently pass along the correct project ID in the background without cluttering up the form.
![Hiding the Project field](/images/posts-2020/4flvfGC54.png)
@@ -78,7 +78,7 @@ With that sorted, I can go back to the Service Broker interface to modify the cu
![Linking the action](/images/posts-2020/mpbPukEeB.png)
The last step before testing is to click that *Enable* button to activate the custom form, and then the *Save* button to save my work. So did it work? Let's head to the *Catalog* tab and open the request:
![Screen recording 2021-05-10 17.01.37.gif](/images/posts-2020/tybyj-5dG.gif)
![Watching the deployment name change](/images/posts-2020/tybyj-5dG.gif)
Cool! So it's dynamically generating the deployment name based on selections made on the form. Now that it works, I can go back to the custom form and set the "Deployment Name" field to be invisible just like the "Project" one.
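The action itself isn't shown in this excerpt, but as a toy illustration of the idea, a vRO action that assembles a deployment name from form selections could be as small as this (the input names are hypothetical, not the ones from my actual form):

```js
// Hypothetical vRO action body: build a deployment name from request-form inputs.
// Declared inputs (strings, invented for illustration): siteCode, envCode, appName
// Return type: string
var suffix = Math.floor(Math.random() * 1000);  // cheap uniqueness for repeated requests
return (siteCode + "-" + envCode + "-" + appName + "-" + suffix).toUpperCase();
```

Service Broker re-evaluates the bound action whenever the referenced fields change, which is presumably what produces the live-updating name seen in the recording above.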