change image base path

@@ -11,7 +11,7 @@ tags:
title: BitWarden password manager self-hosted on free Google Cloud instance
---

-![Bitwarden login](/assets/images/posts-2020/i0UKdXleC.png)
+![Bitwarden login](/images/posts-2020/i0UKdXleC.png)

A friend mentioned the [BitWarden](https://bitwarden.com/) password manager to me yesterday and I had to confess that I'd never heard of it. I started researching it and was impressed by what I found: it's free, [open-source](https://github.com/bitwarden), feature-packed, fully cross-platform (with Windows/Linux/MacOS desktop clients, Android/iOS mobile apps, and browser extensions for Chrome/Firefox/Opera/Safari/Edge/etc), and even offers a self-hosted option.
@@ -18,12 +18,12 @@ That's a pretty sweet setup, but I still needed a way to convert STL 3D models i
Enter "Crostini," Chrome OS's [Linux (Beta) feature](https://chromium.googlesource.com/chromiumos/docs/+/master/containers_and_vms.md). It consists of a hardened Linux VM named `termina` which runs (by default) a Debian Buster LXD container named `penguin` (though you can spin up just about any container for which you can find an [image](https://us.images.linuxcontainers.org/)) and some fancy plumbing to let Chrome OS and Linux interact in specific clearly-defined ways. It's a brilliant balance between offering the flexibility of Linux and preserving Chrome OS's industry-leading security posture.

-![Screenshot 2020-09-14 at 10.41.47.png](/assets/images/posts-2020/lhTnVwCO3.png)
+![Screenshot 2020-09-14 at 10.41.47.png](/images/posts-2020/lhTnVwCO3.png)

There are plenty of great guides (like [this one](https://www.computerworld.com/article/3314739/linux-apps-on-chrome-os-an-easy-to-follow-guide.html)) on how to get started with Linux on Chrome OS so I won't rehash those steps here.

One additional step you will probably want to take is to make sure that your Chromebook is configured to enable hyperthreading, as it may have [hyperthreading disabled by default](https://support.google.com/chromebook/answer/9340236). Just plug `chrome://flags/#scheduler-configuration` into Chrome's address bar, set it to `Enables Hyper-Threading on relevant CPUs`, and then click the button to restart your Chromebook. You'll thank me later.
-![Screenshot 2020-09-14 at 10.53.29.png](/assets/images/posts-2020/LHax6lAwh.png)
+![Screenshot 2020-09-14 at 10.53.29.png](/images/posts-2020/LHax6lAwh.png)

### The Software
I settled on using [FreeCAD](https://www.freecadweb.org/) for parametric modeling and [Ultimaker Cura](https://ultimaker.com/software/ultimaker-cura) for my GCODE slicer, but unfortunately getting them working cleanly wasn't entirely straightforward.
@@ -65,7 +65,7 @@ Comment[de_DE]=Feature-basierter parametrischer Modellierer
MimeType=application/x-extension-fcstd
```
That's it! Get on with your 3D-modeling bad self.
-![Screenshot 2020-09-14 at 10.40.23.png](/assets/images/posts-2020/qDTXt1jp3.png)
+![Screenshot 2020-09-14 at 10.40.23.png](/images/posts-2020/qDTXt1jp3.png)
Now that you've got a model, be sure to [export it as an STL mesh](https://wiki.freecadweb.org/Export_to_STL_or_OBJ) so you can import it into your slicer.

#### Ultimaker Cura
@@ -85,12 +85,12 @@ $ sudo apt update && sudo apt install menulibre
$ menulibre
```
Just plug in the relevant details (you can grab the appropriate icon [here](https://github.com/Ultimaker/Cura/blob/master/icons/cura-128.png)), hit the filing cabinet Save icon, and you should then be able to search for Cura from the Chrome OS launcher.
-![Screenshot 2020-09-14 at 11.00.47.png](/assets/images/posts-2020/VTISYOKHO.png)
+![Screenshot 2020-09-14 at 11.00.47.png](/images/posts-2020/VTISYOKHO.png)

-![Screenshot 2020-09-14 at 10.40.38.png](/assets/images/posts-2020/f8nRJcyI6.png)
+![Screenshot 2020-09-14 at 10.40.38.png](/images/posts-2020/f8nRJcyI6.png)

From there, import the STL mesh, configure the appropriate settings, slice, and save the resulting GCODE. You can then upload the GCODE straight to The Spaghetti Detective and kick off the print.

-![PXL_20200902_201747849.MP.jpg](/assets/images/posts-2020/2g57odtq2.jpeg)
+![PXL_20200902_201747849.MP.jpg](/images/posts-2020/2g57odtq2.jpeg)

Nice!
@@ -14,7 +14,7 @@ I manage a large VMware environment spanning several individual vCenters, and I

I can, and here's how I do it.

-![Annotation 2020-09-16 142625.png](/assets/images/posts-2020/LJOcy2oqc.png)
+![Annotation 2020-09-16 142625.png](/images/posts-2020/LJOcy2oqc.png)

### The Script
The following PowerShell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions.
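*(The script itself follows in the original post. For a rough sense of the core move - connecting to several vCenters in one go - a minimal sketch from a PowerShell 7 session might look like this; the server names are placeholders, and this is not the author's script:)*

```shell
# Sketch only: a single Connect-VIServer call can open sessions to multiple vCenters
# (vcsa1/vcsa2 are placeholder names; Get-Credential prompts interactively)
pwsh -Command 'Connect-VIServer -Server vcsa1.lab.local, vcsa2.lab.local -Credential (Get-Credential)'
```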
@@ -72,12 +72,12 @@ Run the installer, and make sure to tick the box for installing the WSL2 engine.

#### Step Three: Configure Docker Desktop
Launch Docker Desktop from the Start menu, and you should be presented with this friendly prompt:
-![2020-09-22.png](/assets/images/posts-2020/lY2FTflbK.png)
+![2020-09-22.png](/images/posts-2020/lY2FTflbK.png)

Hit that big friendly "gimme WSL2" button. Then open the Docker Settings from the system tray, and make sure that **General > Use the WSL 2 based engine** is enabled. Now navigate to **Resources > WSL Integration**, and confirm that **Enable integration with my default WSL distro** is enabled as well. Smash the "Apply & Restart" button if you've made any changes.

### Test it!
Fire up a WSL session and confirm that everything is working with `docker run hello-world`:
-![2020-09-22 (1).png](/assets/images/posts-2020/8p-PSHx1R.png)
+![2020-09-22 (1).png](/images/posts-2020/8p-PSHx1R.png)

It's beautiful!
@@ -13,25 +13,25 @@ Do you (like me) find yourself frequently searching for information within the s

### The basics
Point your browser to `chrome://settings/searchEngines` to see which sites are registered as Custom Search Engines:
-![Screenshot 2020-09-24 at 09.51.07.png](/assets/images/posts-2020/RuIrsHDqC.png)
+![Screenshot 2020-09-24 at 09.51.07.png](/images/posts-2020/RuIrsHDqC.png)

Each of these search engine entries has three parts: a name ("Search engine"), a Keyword, and a Query URL. The "Search engine" title is just what will appear in the Omnibox when the search engine gets triggered, the Keyword is what you'll type in the Omnibox to trigger it, and the Query URL tells Chrome how to handle the search. All you have to do is type the keyword, hit your Tab key to activate the search, input your query, and hit Enter:
-![recording.gif](/assets/images/posts-2020/o_o7rt4pA.gif)
+![recording.gif](/images/posts-2020/o_o7rt4pA.gif)

For sites which register themselves automatically, the keyword is often set to something like `domain.tld`, so it might make sense to reassign it to something shorter or more descriptive.
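For example, a hand-made entry for searching Reddit (purely illustrative, not one of the entries from this post) could look like:

```
Search engine: Reddit
Keyword:       r
URL:           https://www.reddit.com/search/?q=%s
```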
The Query URL is basically just what appears in the address bar when you search the site directly, with `%s` placed where your query text would normally go. You can view these details for a given search entry by tapping the three-dot menu button and selecting "Edit", and you can manually create new entries by hitting that big friendly "Add" button:
-![Screenshot 2020-09-24 at 10.16.01.png](/assets/images/posts-2020/fmLDUWjia.png)
+![Screenshot 2020-09-24 at 10.16.01.png](/images/posts-2020/fmLDUWjia.png)

By searching the site directly, you might find that it supports additional search filters which get appended to the URL:
-![Screenshot 2020-09-24 at 10.35.08.png](/assets/images/posts-2020/iHsYd7lbw.png)
+![Screenshot 2020-09-24 at 10.35.08.png](/images/posts-2020/iHsYd7lbw.png)

You can add those filters to the Query URL to further customize your Custom Search Engine:
-![Screenshot 2020-09-24 at 10.38.18.png](/assets/images/posts-2020/EBkQTGmNb.png)
+![Screenshot 2020-09-24 at 10.38.18.png](/images/posts-2020/EBkQTGmNb.png)

I spend a lot of my free time helping out on Google's support forums as a part of their [Product Experts program](https://productexperts.withgoogle.com/what-it-is), and I often need to quickly look up a Help Center article or previous forum discussion to assist users. I created a set of Custom Search Engines to make that easier:
-![Screenshot 2020-09-24 at 10.42.57.png](/assets/images/posts-2020/630ix7uVw.png)
-![Screenshot 2020-09-24 at 10.45.54.png](/assets/images/posts-2020/V3qLmfi50.png)
+![Screenshot 2020-09-24 at 10.42.57.png](/images/posts-2020/630ix7uVw.png)
+![Screenshot 2020-09-24 at 10.45.54.png](/images/posts-2020/V3qLmfi50.png)

------
@@ -40,21 +40,21 @@ Even if the site doesn't have a built-in native search, you can leverage Google'
```
http://google.com/search?q=%s&sitesearch=man7.org%2Flinux%2Fman-pages
```
-![Screenshot 2020-09-24 at 10.51.17.png](/assets/images/posts-2020/EkmgtRYN4.png)
-![recording (4).gif](/assets/images/posts-2020/YKADY8YQR.gif)
+![Screenshot 2020-09-24 at 10.51.17.png](/images/posts-2020/EkmgtRYN4.png)
+![recording (4).gif](/images/posts-2020/YKADY8YQR.gif)

------

### Speak foreign to me
This works for pretty much any site which parses the URL to render certain content. I use this for getting words/phrases instantly translated:
-![Screenshot 2020-09-24 at 11.21.58.png](/assets/images/posts-2020/ELly_F6x6.png)
-![recording (2).gif](/assets/images/posts-2020/1LDP5zxCU.gif)
+![Screenshot 2020-09-24 at 11.21.58.png](/images/posts-2020/ELly_F6x6.png)
+![recording (2).gif](/images/posts-2020/1LDP5zxCU.gif)

------

### Shorter shortcuts
Your Query URL doesn't even need to include a query at all! You can use the Custom Search Engines as a sort of hyper-fast shortcut to pages you visit frequently. If I create a new entry with the Keyword `searchax` and `abusing-chromes-custom-search-engines-for-fun-and-profit` as the query URL, I can quickly open this page by typing `searchax[tab][enter]`:
-![Screenshot 2020-09-24 at 12.10.28.png](/assets/images/posts-2020/YilNCaHil.png)
+![Screenshot 2020-09-24 at 12.10.28.png](/images/posts-2020/YilNCaHil.png)

I use that trick pretty regularly for getting back to vCenter appliance management interfaces without having to type out the full FQDN and port number and all that.
@@ -66,7 +66,7 @@ You can do some other creative stuff too, like speedily accessing a temporary sc
data:text/html;charset=utf-8, <title>Scratchpad</title><style>body {padding: 5%; font-size: 1.5em; font-family: Arial; }"></style><link rel="shortcut icon" href="https://ssl.gstatic.com/docs/documents/images/kix-favicon6.ico"/><body OnLoad='document.body.focus();' contenteditable spellcheck="true" >
```
And give it a nice short keyword - like the single letter 's':
-![recording (3).gif](/assets/images/posts-2020/h6dUCApdV.gif)
+![recording (3).gif](/images/posts-2020/h6dUCApdV.gif)

------
@@ -23,7 +23,7 @@ The instructions worked well for me so I won't rehash them all here. When it cam
All I need to do now is execute `sudo ./wsl-vpnkit` and leave that running in the background when I need to use WSL while connected to the corporate VPN.

-![Annotation 2020-10-07 083947.png](/assets/images/posts-2020/MnmMuA0HC.png)
+![Annotation 2020-10-07 083947.png](/images/posts-2020/MnmMuA0HC.png)

Whew! Okay, back to work.
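*(If you'd rather not dedicate a terminal tab to it, something like this can keep the script running in the background - a sketch only, assuming the script lives in the current directory and sudo won't need to prompt for a password:)*

```shell
# Launch wsl-vpnkit detached from the current shell and discard its output
nohup sudo ./wsl-vpnkit > /dev/null 2>&1 &
```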
@@ -15,7 +15,7 @@ title: Setting up Linux on a new Lenovo Chromebook Duet (bonus arm64 complicatio

I've [written in the past](3d-modeling-and-printing-on-chrome-os) about the Linux setup I've been using on my Pixel Slate. My Slate's keyboard stopped working over the weekend, though, and there don't seem to be any replacements (either Google or Brydge) to be found. And then I saw that [Walmart had the 64GB Lenovo Chromebook Duet temporarily marked down](https://twitter.com/johndotbowdre/status/1320733614426988544) to a mere $200 - just slightly more than the Slate's *keyboard* originally cost. So I jumped on that deal, and the little Chromeblet showed up today.

-![PXL_20201027_154908725.PORTRAIT.jpg](/assets/images/posts-2020/kULHPeDuc.jpeg)
+![PXL_20201027_154908725.PORTRAIT.jpg](/images/posts-2020/kULHPeDuc.jpeg)

I'll be putting the Duet through its paces in the coming days to see if/how it can replace my now-tablet-only Slate, but first things first: I need Linux. And this may be a little bit different than the setup on the Slate since the Duet's Mediatek processor uses the aarch64/arm64 architecture instead of amd64.
@@ -23,16 +23,16 @@ So journey with me as I get this little guy set up!

### Installing Linux
This part is dead simple. Just head into **Settings > Linux (Beta)** and hit the **Turn on** button:
-![Screenshot 2020-10-27 at 15.59.12.png](/assets/images/posts-2020/oLso9Wyzj.png)
+![Screenshot 2020-10-27 at 15.59.12.png](/images/posts-2020/oLso9Wyzj.png)

Click **Next**, review the options for username and initial disk size (which can be easily increased later so there's no real need to change it right now), and then select **Install**:
-![Screenshot 2020-10-27 at 16.01.19.png](/assets/images/posts-2020/ACUKsohq6.png)
+![Screenshot 2020-10-27 at 16.01.19.png](/images/posts-2020/ACUKsohq6.png)

It takes just a few minutes to download and initialize the `termina` VM and then create the default `penguin` container:
-![Screenshot 2020-10-27 at 16.04.07.png](/assets/images/posts-2020/2LTaCEdWH.png)
+![Screenshot 2020-10-27 at 16.04.07.png](/images/posts-2020/2LTaCEdWH.png)

You're ready to roll once the Terminal opens and gives you a prompt:
-![Screenshot 2020-10-27 at 16.05.23.png](/assets/images/posts-2020/0-h1flLZs.png)
+![Screenshot 2020-10-27 at 16.05.23.png](/images/posts-2020/0-h1flLZs.png)

Your first action should be to go ahead and install any patches:
```shell
@@ -56,7 +56,7 @@ Review it if you'd like, and then execute it:
sh install.sh
```
When asked if you'd like to change your default shell to `zsh` now, **say no**. This is because it will prompt for your password, but you probably don't have a password set on your brand-new Linux (Beta) account and that just makes things complicated. We'll clear this up later, but for now just check out that slick new prompt:
-![Screenshot 2020-10-27 at 16.30.01.png](/assets/images/posts-2020/8q-WT0AyC.png)
+![Screenshot 2020-10-27 at 16.30.01.png](/images/posts-2020/8q-WT0AyC.png)

Oh My Zsh is pretty handy because you can easily enable [additional plugins](https://github.com/ohmyzsh/ohmyzsh/tree/master/plugins) to make your prompt behave exactly the way you want it to. Let's spruce it up even more with the [powerlevel10k theme](https://github.com/romkatv/powerlevel10k)!
```shell
@@ -71,11 +71,11 @@ We'll need to launch another instance of `zsh` for the theme change to take effe
sudo chsh -s /bin/zsh [username]
```
Now close out the terminal and open it again, and you should be met by the powerlevel10k configurator which will walk you through getting things set up:
-![Screenshot 2020-10-27 at 16.47.02.png](/assets/images/posts-2020/K1ScSuWcg.png)
+![Screenshot 2020-10-27 at 16.47.02.png](/images/posts-2020/K1ScSuWcg.png)

This theme is crazy-configurable, but fortunately the configurator wizard does a great job of helping you choose the options that work best for you.
I pick the Classic prompt style, Unicode character set, Dark prompt color, 24-hour time, Angled separators, Sharp prompt heads, Flat prompt tails, 2-line prompt height, Dotted prompt connection, Right prompt frame, Sparse prompt spacing, Fluent prompt flow, Enabled transient prompt, Verbose instant prompt, and (finally) Yes to apply the changes.
-![New P10k prompt](/assets/images/posts-2021/08/20210804_p10k_prompt.png)
+![New P10k prompt](/images/posts-2021/08/20210804_p10k_prompt.png)
Looking good!

### Visual Studio Code
@@ -85,15 +85,15 @@ curl -L https://aka.ms/linux-arm64-deb > code_arm64.deb
sudo apt install ./code_arm64.deb
```
VS Code should automatically appear in the Chromebook's Launcher, or you can use it to open a file directly with `code [filename]`:
-![Screenshot 2020-10-27 at 17.01.30.png](/assets/images/posts-2020/XtmaR9Z0J.png)
+![Screenshot 2020-10-27 at 17.01.30.png](/images/posts-2020/XtmaR9Z0J.png)
Nice!

### Android platform tools (adb and fastboot)
I sometimes don't want to wait for my Pixel to get updated naturally, so I love using `adb sideload` to manually update my phones. Here's what it takes to set that up. Installing adb is as simple as `sudo apt install adb`. To use it, enable the USB Debugging Developer Option on your phone, and then connect the phone to the Chromebook. You'll get a prompt to connect the phone to Linux:
-![Screenshot 2020-10-27 at 18.02.17.png](/assets/images/posts-2020/MkGu29HKl.png)
+![Screenshot 2020-10-27 at 18.02.17.png](/images/posts-2020/MkGu29HKl.png)

Once you connect the phone to Linux, check the phone to approve the debugging connection. You can then issue `adb devices` to verify the phone is connected:
-![Screenshot 2020-10-27 at 18.06.49.png](/assets/images/posts-2020/a0uqHkJiC.png)
+![Screenshot 2020-10-27 at 18.06.49.png](/images/posts-2020/a0uqHkJiC.png)

*I've since realized that the platform-tools (adb/fastboot) available in the repos are much older than what's required for flashing a factory image or sideloading an OTA image to a modern Pixel phone. This'll do fine for installing APKs either to your Chromebook or your phone, but I had to pull out my trusty Pixelbook to flash GrapheneOS to my Pixel 4a.*
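*(For reference, with a current enough platform-tools build the OTA sideload flow generally looks like the sketch below; `ota.zip` is a placeholder filename, not a file from this post.)*

```shell
adb devices                # confirm the phone is visible and authorized
adb reboot sideload        # reboot the phone into sideload mode
adb sideload ota.zip       # push the full OTA package
```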
@@ -110,11 +110,11 @@ sudo chmod +x /opt/microsoft/powershell/7/pwsh
sudo ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh
```
You can then just run `pwsh`:
-![Screenshot 2020-10-27 at 17.28.44.png](/assets/images/posts-2020/QRP4iyLnu.png)
+![Screenshot 2020-10-27 at 17.28.44.png](/images/posts-2020/QRP4iyLnu.png)
That was the hard part. To install PowerCLI into your new PowerShell environment, just run `Install-Module -Name VMware.PowerCLI` at the `PS >` prompt, and accept the warning about installing a module from an untrusted repository.

I'm planning to use PowerCLI against my homelab without trusted SSL certificates so (note to self) I need to run `Set-PowerCLIConfiguration -InvalidCertificateAction Ignore` before I try to connect.
-![Screenshot 2020-10-27 at 17.34.39.png](/assets/images/posts-2020/YaFNJJG_c.png)
+![Screenshot 2020-10-27 at 17.34.39.png](/images/posts-2020/YaFNJJG_c.png)

Woot!
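*(Those same steps can also be scripted from the Linux shell by passing commands to `pwsh` directly - a sketch, with the vCenter address as a placeholder:)*

```shell
# Install PowerCLI for the current user, relax certificate checking, then connect
pwsh -Command 'Install-Module -Name VMware.PowerCLI -Scope CurrentUser -Force'
pwsh -Command 'Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false'
pwsh -Command 'Connect-VIServer -Server vcsa.lab.local -Credential (Get-Credential)'
```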
@@ -146,12 +146,12 @@ And finally update the package cache and install `docker` and its friends:
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io
```
-![Screenshot 2020-10-27 at 18.48.34.png](/assets/images/posts-2020/k2uiYi5e8.png)
+![Screenshot 2020-10-27 at 18.48.34.png](/images/posts-2020/k2uiYi5e8.png)
Xzibit would be proud!
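*(A couple of optional post-install checks - assumptions on my part, not steps from the original post:)*

```shell
sudo docker run --rm hello-world   # confirm the engine can pull and run a container
sudo usermod -aG docker $USER      # optional: run docker without sudo (log out/in afterwards)
```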
### 3D printing utilities
Just like last time, I'll want to be sure I can do light 3D part design and slicing on this Chromebook. Once again, I can install FreeCAD with `sudo apt install freecad`, and this time I didn't have to implement any workarounds for graphical issues:
-![Screenshot 2020-10-27 at 19.16.31.png](/assets/images/posts-2020/q1inyuUOb.png)
+![Screenshot 2020-10-27 at 19.16.31.png](/images/posts-2020/q1inyuUOb.png)

Unfortunately, though, I haven't found a slicer application compiled with support for aarch64/arm64. There's a *much* older version of Cura available in the default Debian repos but it crashes upon launch. Neither Cura nor PrusaSlicer (or the Slic3r upstream) offer arm64 releases.
@@ -173,7 +173,7 @@ conda install -c conda-forge notebook

You can then launch the notebook with `jupyter notebook` and it will automatically open up in a Chrome OS browser tab:

-![Screenshot 2020-11-03 at 14.34.09.png](/assets/images/posts-2020/U5E556eXf.png)
+![Screenshot 2020-11-03 at 14.34.09.png](/images/posts-2020/U5E556eXf.png)

Cool! Now I just need to learn what I'm doing with Jupyter - but at least I don't have an excuse about "my laptop won't run it".
@@ -10,7 +10,7 @@ title: 'Showdown: Lenovo Chromebook Duet vs. Google Pixel Slate'

Okay, okay, this isn't actually going to be a comparison review between the two wildly-mismatched-but-also-kind-of-similar [Chromeblets](https://www.reddit.com/r/chromeos/comments/bp1nwo/branding/), but rather a (hopefully) brief summary of my experience moving from an $800 Pixel Slate + $200 Google keyboard to a Lenovo Chromebook Duet I picked up on sale for just $200.

-![PXL_20201104_160532096.MP.jpg](/assets/images/posts-2020/P-x5qEg_9.jpeg)
+![PXL_20201104_160532096.MP.jpg](/images/posts-2020/P-x5qEg_9.jpeg)

### Background
Up until last week, I'd been using the Slate as my primary personal computing device for the previous 20 months or so, mainly in laptop mode (as opposed to tablet mode). I do a lot of casual web browsing, and I spend a significant portion of my free time helping other users on Google's product support forums as a part of the [Google Product Experts program](https://productexperts.withgoogle.com/what-it-is). I also work a lot with the [Chrome OS Linux (Beta) environment](setting-up-linux-on-a-new-lenovo-chromebook-duet-bonus-arm64-complications), but I avoid Android apps as much as I can. And I also used the Slate for a bit of Stadia gaming when I wasn't near a Chromecast.
@@ -21,32 +21,32 @@ I was pretty happy with the Slate, but its expensive keyboard stopped working re

### Size
When you put these machines side by side, the first difference that jumps out is the size disparity. The 12.3" Pixel Slate is positively massive next to the 10.1" Lenovo Duet.
-![PXL_20201104_160825979.MP (1).jpg](/assets/images/posts-2020/gVj7d_2Nu.jpeg)
+![PXL_20201104_160825979.MP (1).jpg](/images/posts-2020/gVj7d_2Nu.jpeg)

The Duet is physically smaller so the display itself is of course smaller. I had a brief moment of panic when I first logged in and the setup wizard completely filled the screen. Dialing Chrome OS's display scaling down to 80% strikes a good balance for me between keeping fonts legible and still displaying enough content to be worthwhile. It can get a bit tight when you've got windows docked side-by-side but I'm getting by okay.

Of course, the smaller size of the Duet also makes it work better as a tablet in my mind. It's comfortable enough to hold with one hand while you interact with the other, whereas the Slate always felt a little too big for that to me.
-![PXL_20201104_213309828.MP.jpg](/assets/images/posts-2020/qne9SybLi.jpeg)
+![PXL_20201104_213309828.MP.jpg](/images/posts-2020/qne9SybLi.jpeg)

### Keyboard
A far more impactful size difference is the keyboards, though. The Duet keyboard gets a bit cramped, particularly over toward the right side (you know, those pesky braces and semicolons that are *never* needed when coding):
-![PXL_20201104_160747877.MP.jpg](/assets/images/posts-2020/CBziPHD8A.jpeg)
+![PXL_20201104_160747877.MP.jpg](/images/posts-2020/CBziPHD8A.jpeg)

Getting used to typing on this significantly smaller keyboard has been the biggest adjustment so far. The pad on my pinky finger is wider than the last few keys at the right edge of the keyboard so I've struggled with accurately hitting the correct `[` or `]`, and also with smacking Return (and inevitably sending a malformed chat message) when trying to insert an apostrophe. I feel like I'm slowly getting the hang of it, but like I said, it's been an adjustment.

### Cover
-![PXL_20201104_160703333._exported_1604610747029.jpg](/assets/images/posts-2020/yiCW6XZbF.jpeg)
+![PXL_20201104_160703333._exported_1604610747029.jpg](/images/posts-2020/yiCW6XZbF.jpeg)
The Pixel Slate's keyboard + folio cover is a single (floppy) piece. The keyboard connects to contacts on the bottom edge of the Slate, and magnets hold it in place. The rear cover then folds and sticks to the back of the Slate with magnets to prop up the tablet in different angles. The magnet setup means you can smoothly transition it through varying levels of tilt, which is pretty nice. But being a single piece means the keyboard might get in the way if you're trying to use it as just a propped-up tablet. And the extra folding in the back takes up a bit of space so the Slate may not work well as a laptop on your actual lap.

-![PXL_20201104_160949342.MP.jpg](/assets/images/posts-2020/9_Ze3zyBk.jpeg)
+![PXL_20201104_160949342.MP.jpg](/images/posts-2020/9_Ze3zyBk.jpeg)

The Duet's rear cover has a fabric finish kind of similar to the cases Google offers for their phones, and it provides a great texture for holding the tablet. It sticks to the back of the Duet through the magic of magnets, and the lower half of it folds out to create a really sturdy kickstand. And it's completely separate from the keyboard which is great for when you're using the Duet as a tablet (either handheld or propped up for watching a movie or gaming with Stadia).

-![PXL_20201104_161022969.MP.jpg](/assets/images/posts-2020/nWRu2TB8i.jpeg)
+![PXL_20201104_161022969.MP.jpg](/images/posts-2020/nWRu2TB8i.jpeg)

And this little kickstand can go *low*, much lower than the Slate. This makes it perfect for my late-night Stadia sessions while sitting in bed. I definitely prefer this approach compared to what Google did with the Pixel Slate.

-![PXL_20201104_161057794.MP.jpg](/assets/images/posts-2020/BAf7knBk5.jpeg)
+![PXL_20201104_161057794.MP.jpg](/images/posts-2020/BAf7knBk5.jpeg)

### Performance
The Duet does struggle a bit here. It's basically got a [smartphone processor](https://www.notebookcheck.net/Mediatek-Helio-P60T-Processor-Benchmarks-and-Specs.470711.0.html) and half the RAM of the Slate. Switching between windows and tabs sometimes takes an extra moment or two to catch up (particularly if said tab has been silently suspended in the background). Similarly, working with Linux apps is just a bit slower than you'd like it to be. Still, I've spent a bit more than a week now with the Duet as my go-to computer and it's never really been slow enough to bother me.
@@ -14,7 +14,7 @@ title: Safeguard your Android's battery with Tasker + Home Assistant

A few months ago, I started using the [Accubattery app](https://play.google.com/store/apps/details?id=com.digibites.accubattery) to keep a closer eye on how I'd been charging my phones. The app has a handy feature that notifies you once the battery level reaches a certain threshold so you can pull the phone off the charger and extend the lithium battery's service life, and it even offers an estimate for what that impact might be. For instance, right now the app indicates that charging my Pixel 5 from 51% to 100% would cause 0.92 wear cycles, while stopping the charge at 80% would impose just 0.17 cycles.

-![Screenshot_20201114-135308.png](/assets/images/posts-2020/aeIOr8w6k.png)
+![Screenshot_20201114-135308.png](/images/posts-2020/aeIOr8w6k.png)

But that depends on me being near my phone and conscious so I can take action when the notification goes off. That's often a big assumption to make - and, frankly, I'm lazy.
@@ -30,17 +30,17 @@ I'm not going to go through how to install Home Assistant on the Pi or how to co

### The Recipe
1. Plug the Wemo into a wall outlet, and plug a phone charger into the Wemo. Add the Belkin Wemo integration in Home Assistant, and configure the device and entity. I named mine `switchy`. Make a note of the Entity ID: `switch.switchy`. We'll need that later.
-![Screenshot 2020-11-14 at 15.28.53.png](/assets/images/posts-2020/Gu5I3LUep.png)
+![Screenshot 2020-11-14 at 15.28.53.png](/images/posts-2020/Gu5I3LUep.png)
2. Either point your phone's browser to your [Home Assistant instance's local URL](http://homeassistant.local:8123/), or use the [Home Assistant app](https://play.google.com/store/apps/details?id=io.homeassistant.companion.android) to access it. Tap your username at the bottom of the menu and scroll all the way down to the Long-Lived Access Tokens section. Tap to create a new token. It doesn't matter what you name it, but be sure to copy the token data once it is generated since you won't be able to display it again.
3. Install the [Home Assistant Plug-In for Tasker](https://play.google.com/store/apps/details?id=com.markadamson.taskerplugin.homeassistant). Open Tasker, create a new Task called 'ChargeOff', and set the action to `Plugin > Home Assistant Plug-in for Tasker > Call Service`. Tap the pencil icon to edit the configuration, and then tap the plus sign to add a new server. Give it whatever name you like, and then enter your Home Assistant's IP address for the Base URL, followed by the port number `8123`. For example, `http://192.168.1.99:8123`. Paste in the Long-Lived Access Token you generated earlier. Go on and hit the Test Server button to make sure you got it right. It'll wind up looking something like this:
-![Screenshot_20201114-160839.png](/assets/images/posts-2020/8Jg4zgrgB.png)
+![Screenshot_20201114-160839.png](/images/posts-2020/8Jg4zgrgB.png)
For the Service field, you need to tell HA what you want it to do. We want it to turn off a switch so enter `switch.turn_off`. We'll use the Service Data field to tell it which switch, in JSON format:
```json
{"entity_id": "switch.switchy"}
```
Tap Test Service to make sure it works - and verify that the switch does indeed turn off.
-![Screenshot_20201114-164514.png](/assets/images/posts-2020/U3LfmEJ_7.png)
+![Screenshot_20201114-164514.png](/images/posts-2020/U3LfmEJ_7.png)
4. Hard part is over. Now we just need to set up a profile in Tasker to fire our new task. I named mine 'Charge Limiter'. I started with `State > Power > Battery Level` and set it to trigger between 81-100%, and also added `State > Power > Source: Any` so it will only be active while charging. I also only want this to trigger while my phone is charging at home, so I added `State > Net > Wifi Connected` and then specified my home SSID. Link this profile to the Task you created earlier, and never worry about overcharging your phone again.
-![Screenshot_20201114-172454.png](/assets/images/posts-2020/h7tl6facr.png)
+![Screenshot_20201114-172454.png](/images/posts-2020/h7tl6facr.png)

You can use a similar Task to turn the switch back on at a set time - or you could configure that automation directly in Home Assistant. I added an action to turn on the switch to my Google Assistant bedtime routine and that works quite well for my needs.
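*(If you want to sanity-check the token and service call outside of Tasker, the same request can be made against Home Assistant's REST API with curl - the IP and token below are placeholders, and swapping `turn_off` for `turn_on` gives you the re-enable call:)*

```shell
curl -X POST \
  -H "Authorization: Bearer YOUR_LONG_LIVED_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"entity_id": "switch.switchy"}' \
  http://192.168.1.99:8123/api/services/switch/turn_off
```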
@@ -16,7 +16,7 @@ title: Auto-connect to ProtonVPN on untrusted WiFi with Tasker [Update!]

I recently shared how I use [Tasker and Home Assistant to keep my phone from charging past 80%](safeguard-your-androids-battery-with-tasker-home-assistant). Today, I'm going to share the setup I use to automatically connect my phone to a VPN on networks I *don't* control.

-![Tasker + OpenVPN](/assets/images/posts-2020/Ki7jo65t3.png)
+![Tasker + OpenVPN](/images/posts-2020/Ki7jo65t3.png)

### Background
Android has an option to [set a VPN as Always-On](https://support.google.com/android/answer/9089766#always-on_VPN) so for maximum security I could just use that. I'm not *overly* concerned (yet?) with my internet traffic being intercepted upstream of my ISP, though, and often need to connect to other devices on my home network without passing through a VPN (or introducing split-tunnel complexity). But I do want to be sure that my traffic is protected whenever I'm connected to a WiFi network controlled by someone else.
@@ -45,7 +45,7 @@ You can find instructions for configuring the OpenVPN client to work with Proton
- **Country configs** connect to a random VPN node in your target country
- **Standard server configs** let you choose the specific VPN node to use
- **Free server configs** connect you to one of the VPN nodes available in the free tier
-![Client config download page](/assets/images/posts-2020/vdIG0jHmk.png)
+![Client config download page](/images/posts-2020/vdIG0jHmk.png)

Feel free to download more than one if you'd like to have different profiles available within the OpenVPN app.
@@ -56,7 +56,7 @@ ProtonVPN automatically generates a set of user credentials to use with a third-
### Configuring OpenVPN for Android
Now that you've got the config file(s) and your client credentials, it's time to actually configure that client.

-![OpenVPN connection list](/assets/images/posts-2020/9WdA6HRch.png)
+![OpenVPN connection list](/images/posts-2020/9WdA6HRch.png)

1. Launch the OpenVPN for Android app and tap the little 'downvote-in-a-box' "Import" icon.
2. Browse to wherever you saved the `.ovpn` config files and select the one you'd like to use.
@@ -69,7 +69,7 @@ Success!

I don't like to have a bunch of persistent notification icons hanging around (and Android already shows a persistent status icon when a VPN connection is active). If you're like me, long-press the OpenVPN notification and tap the gear icon. Then tap on the **Connection statistics** category and activate the **Minimized** slider. The notification will still appear, but it will collapse to the bottom of your notification stack and you won't get bugged by the icon.

-![Notification settings](/assets/images/posts-2020/WWuHwVvrk.png)
+![Notification settings](/images/posts-2020/WWuHwVvrk.png)

### Tasker profiles
Open up Tasker and get ready to automate! We're going to wind up with at least two new Tasker profiles so (depending on how many you already have) you might want to create a new project by long-pressing the Home icon at the bottom-left of the screen and selecting the **Add** option. I chose to group all my VPN-related profiles in a project named (oh-so-creatively) "VPN". Totally your call though.
@@ -146,7 +146,7 @@ A1: Variable Clear [ Name:%TRUSTED_WIFI Pattern Matching:Off Local Variables Onl
#### OpenVPN Connect app configuration
After installing and launching the official [OpenVPN Connect app](https://play.google.com/store/apps/details?id=net.openvpn.openvpn), tap the "+" button at the bottom right to create a new profile. Swipe over to the "File" tab and import the `*.ovpn` file you downloaded from ProtonVPN. Paste in the username, tick the "Save password" box, and paste in the password as well. I also chose to rename the profile to something a little bit more memorable - you'll need this name later. From there, hit the "Add" button and then go ahead and tap on your profile to test the connection.

-![Creating a profile in OpenVPN Connect](/assets/images/posts-2020/KjGOX8Yiv.png)
+![Creating a profile in OpenVPN Connect](/images/posts-2020/KjGOX8Yiv.png)

#### Tasker profiles
Go ahead and create the [Trusted Wifi profile](#trusted-wifi) as described above.
@@ -17,17 +17,17 @@ Normally that tool is used for creating bootable media to [reinstall Chrome OS on
1. Install the [Chromebook Recovery Utility](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm).
2. Download the ISO you intend to use.
3. Rename the file to append `.bin` on the end, after the `.iso` bit (see the sketch after this list):
-![Screenshot 2020-12-23 at 15.42.40.png](/assets/images/posts-2020/uoTjgtbN1.png)
+![Screenshot 2020-12-23 at 15.42.40.png](/images/posts-2020/uoTjgtbN1.png)
4. Plug in the USB drive you're going to sacrifice for this effort - remember that ALL data on the drive will be erased.
5. Open the recovery utility, click on the gear icon at the top right, and select the *Use local image* option:
-![Screenshot 2020-12-23 at 15.44.04.png](/assets/images/posts-2020/vdTpW9t7Q.png)
+![Screenshot 2020-12-23 at 15.44.04.png](/images/posts-2020/vdTpW9t7Q.png)
6. Browse to and select the `*.iso.bin` file.
7. Choose the USB drive, and click *Continue*.
-![Screenshot 2020-12-23 at 15.45.59.png](/assets/images/posts-2020/p_Ieqsw4p.png)
+![Screenshot 2020-12-23 at 15.45.59.png](/images/posts-2020/p_Ieqsw4p.png)
8. Click *Create now* to start the writing!
-![Screenshot 2020-12-23 at 15.53.03.png](/assets/images/posts-2020/lhw5EEqSD.png)
+![Screenshot 2020-12-23 at 15.53.03.png](/images/posts-2020/lhw5EEqSD.png)
9. All done! It probably won't work great for actually recovering your Chromebook but will do wonders for installing ESXi (or whatever) on another computer!
-![Screenshot 2020-12-23 at 15.53.32.png](/assets/images/posts-2020/-lp1-DGiM.png)
+![Screenshot 2020-12-23 at 15.53.32.png](/images/posts-2020/-lp1-DGiM.png)
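*(The rename in step 3 can also be done from a terminal - a sketch, with a placeholder filename:)*

```shell
mv installer.iso installer.iso.bin
```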

You can also use the CRU to make a bootable USB from a `.zip` archive containing a single `.img` file, such as those commonly used to distribute [Raspberry Pi images](https://www.raspberrypi.org/documentation/installation/installing-images/chromeos.md).
@@ -14,7 +14,7 @@ It's a good idea to take a snapshot of your virtual appliances before applying a

*(Yes, that's a lesson I learned the hard way - and warnings about that are tragically hard to come by from what I've seen. So I'm sharing my notes so that you can avoid making the same mistake.)*

-![Screenshot 2021-01-30 16.09.02.png](/assets/images/posts-2020/XTaU9VDy8.png)
+![Screenshot 2021-01-30 16.09.02.png](/images/posts-2020/XTaU9VDy8.png)

Take these steps when you need to snapshot linked vCenters to avoid breaking replication:
@@ -13,7 +13,7 @@ title: VMware Home Lab on Intel NUC 9

I picked up an Intel NUC 9 Extreme kit a few months back (thanks, VMware!) and have been slowly tinkering with turning it into an extremely capable self-contained home lab environment. I'm pretty happy with where things sit right now so figured it was about time to start documenting and sharing what I've done.

-![Screenshot 2020-12-23 at 12.30.07.png](/assets/images/posts-2020/SIDah-Lag.png)
+![Screenshot 2020-12-23 at 12.30.07.png](/images/posts-2020/SIDah-Lag.png)

### Hardware
*(Caution: here be affiliate links)*
@@ -41,24 +41,24 @@ The NUC connects to my home network through its onboard gigabit Ethernet interfa
I used the Chromebook Recovery Utility to write the ESXi installer ISO to *another* USB drive (how-to [here](burn-an-iso-to-usb-with-the-chromebook-recovery-utility)), inserted that bootable drive into a port on the front of the NUC, and booted the NUC from the drive. Installing ESXi 7.0u1 was as easy as it could possibly be. All hardware was automatically detected and the appropriate drivers loaded. Once the host booted up, I used the DCUI to configure a static IP address (`192.168.1.11`). I then shut down the NUC, disconnected the keyboard and monitor, and moved it into the cabinet where it will live out its headless existence.

I was then able to point my web browser to `https://192.168.1.11/ui/` to log in to the host and get down to business. First stop: networking. For now, I only need a single standard switch (`vSwitch0`) with two portgroups: one for the host's vmkernel interface, and the other for the VMs (including the nested ESXi appliances) that are going to run directly on this physical host. The one "gotcha" when working with a nested environment is that you'll need to edit the virtual switch's security settings to "Allow promiscuous mode" and "Allow forged transmits" (for reasons described [here](https://williamlam.com/2013/11/why-is-promiscuous-mode-forged.html)).
-![ink (2).png](/assets/images/posts-2020/w0HeFSi7Q.png)
+![ink (2).png](/images/posts-2020/w0HeFSi7Q.png)
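*(The same security settings can also be applied from the ESXi shell if you prefer - a sketch, assuming the switch is named `vSwitch0`:)*

```shell
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 \
  --allow-promiscuous=true --allow-forged-transmits=true
```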

I created a single datastore to span the entirety of that 1TB NVMe drive. The nested ESXi hosts will use VMDKs stored here to provide storage to the nested VMs.
-![Screenshot 2020-12-28 at 12.24.57.png](/assets/images/posts-2020/XDe98S4Fx.png)
+![Screenshot 2020-12-28 at 12.24.57.png](/images/posts-2020/XDe98S4Fx.png)

#### Domain Controller
I created a new Windows VM with 2 vCPUs, 4GB of RAM, and a 90GB virtual hard drive, and I booted it off a [Server 2019 evaluation ISO](https://www.microsoft.com/en-US/evalcenter/evaluate-windows-server-2019?filetype=ISO). I gave it a name, a static IP address, and proceeded to install and configure the Active Directory Domain Services and DNS Server roles. I created static A and PTR records for the vCenter Server Appliance I'd be deploying next (`vcsa.`) and the physical host (`nuchost.`). I configured ESXi to use this new server for DNS resolutions, and confirmed that I could resolve the VCSA's name from the host.

-![Screenshot 2020-12-30 at 13.10.58.png](/assets/images/posts-2020/4o5bqRiTJ.png)
+![Screenshot 2020-12-30 at 13.10.58.png](/images/posts-2020/4o5bqRiTJ.png)

Before moving on, I installed the Chrome browser on this new Windows VM and also set up remote access via [Chrome Remote Desktop](https://remotedesktop.google.com/access/). This will let me remotely access and manage my lab environment without having to punch holes in the router firewall (or worry about securing said holes). And it's got "chrome" in the name so it will work just fine from my Chromebooks!

#### vCenter
I attached the vCSA installation ISO to the Windows VM and performed the vCenter deployment from there. (See, I told you that Chrome Remote Desktop would come in handy!)
-![Screenshot 2020-12-30 at 14.51.09.png](/assets/images/posts-2020/OOP_lstyM.png)
+![Screenshot 2020-12-30 at 14.51.09.png](/images/posts-2020/OOP_lstyM.png)

After the vCenter was deployed and the basic configuration completed, I created a new cluster to contain the physical host. There's likely only ever going to be the one physical host but I like being able to logically group hosts in this way, particularly when working with PowerCLI. I then added the host to the vCenter by its shiny new FQDN.
-![Screenshot 2021-01-05 10.39.54.png](/assets/images/posts-2020/Wu3ZIIVTs.png)
+![Screenshot 2021-01-05 10.39.54.png](/images/posts-2020/Wu3ZIIVTs.png)

I've now got a fully-functioning VMware lab, complete with a physical hypervisor to run the workloads, a vCenter server to manage the workloads, and a Windows DNS server to tell the workloads how to talk to each other. Since the goal is to ultimately simulate a (small) production environment, let's set up some additional networking before we add anything else.
@@ -86,7 +86,7 @@ Of course, not everything that I'm going to deploy in the lab will need to be ac
#### vSwitch1
I'll start by adding a second vSwitch to the physical host. It doesn't need a physical adapter assigned since this switch will be for internal traffic. I create two port groups: one tagged for the VLAN 1610 Management traffic, which will be useful for attaching VMs on the physical host to the internal network; and the second will use VLAN 4095 to pass all VLAN traffic to the nested ESXi hosts. And again, this vSwitch needs to have its security policy set to allow Promiscuous Mode and Forged Transmits. I also set the vSwitch to support an MTU of 9000 so I can use Jumbo Frames on the vMotion and vSAN networks.

-![Screenshot 2021-01-05 16.37.57.png](/assets/images/posts-2020/7aNJa2Hlm.png)
+![Screenshot 2021-01-05 16.37.57.png](/images/posts-2020/7aNJa2Hlm.png)

#### VyOS
Wouldn't it be great if the VMs that are going to be deployed on those `1610`, `1620`, and `1630` VLANs could still have their traffic routed out of the internal networks? But doing routing requires a router (or so my network friends tell me)... so I deployed a VM running the open-source VyOS router platform. I used [William Lam's instructions for installing VyOS](https://williamlam.com/2020/02/how-to-automate-the-creation-multiple-routable-vlans-on-single-l2-network-using-vyos.html), making sure to attach the first network interface to the Home-Network portgroup and the second to the Isolated portgroup (VLAN 4095). I then set to work [configuring the router](https://docs.vyos.io/en/latest/quick-start.html).
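*(For a flavor of what that VyOS configuration looks like, defining one of the routed VLAN interfaces goes roughly like this - the VLAN ID matches the lab's `1610` network but the address is a placeholder, not the lab's actual addressing:)*

```shell
configure
set interfaces ethernet eth1 vif 1610 address '172.16.10.1/24'
commit
save
exit
```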
@@ -190,24 +190,24 @@ Alright, it's time to start building up the nested environment. To start, I grab

Deploying the virtual appliances is just like any other "Deploy OVF Template" action. I placed the VMs on the `physical-cluster` compute resource, and selected to thin provision the VMDKs on the local datastore. I chose the "Isolated" VM network which uses VLAN 4095 to make all the internal VLANs available on a single portgroup.

-![Screenshot 2021-01-07 10.54.50.png](/assets/images/posts-2020/zOJp-jqVb.png)
+![Screenshot 2021-01-07 10.54.50.png](/images/posts-2020/zOJp-jqVb.png)

And I set the networking properties accordingly:

-![Screenshot 2021-01-07 11.09.36.png](/assets/images/posts-2020/PZ6FzmJcx.png)
+![Screenshot 2021-01-07 11.09.36.png](/images/posts-2020/PZ6FzmJcx.png)

These virtual appliances come with 3 hard drives. The first will be used as the boot device, the second for vSAN caching, and the third for vSAN capacity. I doubled the size of the second and third drives, to 8GB and 16GB respectively:

-![Screenshot 2021-01-07 13.01.19.png](/assets/images/posts-2020/nkdH7Jfxw.png)
+![Screenshot 2021-01-07 13.01.19.png](/images/posts-2020/nkdH7Jfxw.png)

After booting the new host VMs, I created a new cluster in vCenter and then added the nested hosts:
-![Screenshot 2021-01-07 13.28.03.png](/assets/images/posts-2020/z8fvzu4Km.png)
+![Screenshot 2021-01-07 13.28.03.png](/images/posts-2020/z8fvzu4Km.png)

Next, I created a new Distributed Virtual Switch to break out the VLAN trunk on the nested host "physical" adapters into the individual VLANs I created on the VyOS router. Again, each port group will need to allow Promiscuous Mode and Forged Transmits, and I set the dvSwitch MTU size to 9000 (to support Jumbo Frames on the vSAN and vMotion portgroups).
-![Screenshot 2021-01-08 10.04.24.png](/assets/images/posts-2020/arA7gurqh.png)
+![Screenshot 2021-01-08 10.04.24.png](/images/posts-2020/arA7gurqh.png)

I migrated the physical NICs and `vmk0` to the new dvSwitch, and then created new vmkernel interfaces for vMotion and vSAN traffic on each of the nested hosts:
-![Screenshot 2021-01-19 10.03.27.png](/assets/images/posts-2020/6-auEYd-W.png)
+![Screenshot 2021-01-19 10.03.27.png](/images/posts-2020/6-auEYd-W.png)

I then ssh'd into the hosts and used `vmkping` to make sure they could talk to each other over these interfaces. I changed the vMotion interface to use the vMotion TCP/IP stack so needed to append the `-S vmotion` flag to the command:
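*(The author's actual command and output fall between these hunks; for reference, a vMotion-stack `vmkping` generally takes this shape - the vmk number and target IP are placeholders:)*

```shell
# -I selects the local vmkernel interface, -S the TCP/IP stack,
# -d sets don't-fragment and -s 8972 exercises the 9000-byte MTU
vmkping -I vmk1 -S vmotion -d -s 8972 172.16.98.22
```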
@@ -234,10 +234,10 @@ round-trip min/avg/max = 0.202/0.252/0.312 ms
```

Okay, time to throw some vSAN on these hosts. Select the cluster object, go to the configuration tab, scroll down to vSAN, and click "Turn on vSAN". This will be a single site cluster, and I don't need to enable any additional services. When prompted, I claim the 8GB drives for the cache tier and the 16GB drives for capacity.
-![Screenshot 2021-01-23 17.35.34.png](/assets/images/posts-2020/mw-rsq_1a.png)
+![Screenshot 2021-01-23 17.35.34.png](/images/posts-2020/mw-rsq_1a.png)

It'll take a few minutes for vSAN to get configured on the cluster.
-![Screenshot 2021-01-23 17.41.13.png](/assets/images/posts-2020/mye0LdtNj.png)
+![Screenshot 2021-01-23 17.41.13.png](/images/posts-2020/mye0LdtNj.png)

Huzzah! Next stop:
@@ -253,7 +253,7 @@ Anyhoo, each of these VMs will need to be resolvable in DNS so I started by crea
|`vra.lab.bowdre.net`|`192.168.1.42`|

I then attached the installer ISO to my Windows VM and ran through the installation from there.
-![Screenshot 2021-02-05 16.28.41.png](/assets/images/posts-2020/42n3aMim5.png)
+![Screenshot 2021-02-05 16.28.41.png](/images/posts-2020/42n3aMim5.png)

Similar to the vCenter deployment process, this one prompts you for all the information it needs up front and then takes care of everything from there. That's great news because this is a pretty long deployment; it took probably two hours from clicking the final "Okay, do it" button to being able to log in to my shiny new vRealize Automation environment.
@@ -12,21 +12,21 @@ toc: false
---

I recently ran into a peculiar issue after upgrading my vRealize Automation homelab to the new 8.3 release, and the error message displayed in the UI didn't give me a whole lot of information to work with:
-![Screenshot 2021-02-18 10.27.41.png](/assets/images/posts-2020/IL29_Shlg.png)
+![Screenshot 2021-02-18 10.27.41.png](/images/posts-2020/IL29_Shlg.png)

I connected to the vRA appliance to try to find the relevant log excerpt, but [doing so isn't all that straightforward](https://www.stevenbright.com/2020/01/vmware-vrealize-automation-8-0-logs/#:~:text=Access%20Logs%20from%20the%20CLI) given the containerized nature of the services.
So instead I used the `vracli log-bundle` command to generate a bundle of all relevant logs, and I then transferred the resulting (2.2GB!) `log-bundle.tar` to my workstation for further investigation. I expanded the tar and ran `tree -P '*.log'` to get a quick idea of what I've got to deal with:
-![Screenshot 2021-02-18 11.01.56.png](/assets/images/posts-2020/wAa9KjBHO.png)
+![Screenshot 2021-02-18 11.01.56.png](/images/posts-2020/wAa9KjBHO.png)
Ugh. Even if I knew which logs I wanted to look at (and I don't) it would take ages to dig through all of this. There's got to be a better way.

And there is! Visual Studio Code lets you open an entire directory tree in the editor:
-![Screenshot 2021-02-18 12.19.17.png](/assets/images/posts-2020/SBKtJ8K1p.png)
+![Screenshot 2021-02-18 12.19.17.png](/images/posts-2020/SBKtJ8K1p.png)

You can then "Find in Files" with `Ctrl`+`Shift`+`F`, and VS Code will *very* quickly search through all the files to find what you're looking for:
-![Screenshot 2021-02-18 12.25.01.png](/assets/images/posts-2020/PPZu_UOGO.png)
+![Screenshot 2021-02-18 12.25.01.png](/images/posts-2020/PPZu_UOGO.png)

You can also click the "Open in editor" link at the top of the search results to open the matching snippets in a single view:
-![Screenshot 2021-02-18 12.31.46.png](/assets/images/posts-2020/kJ_l7gPD2.png)
+![Screenshot 2021-02-18 12.31.46.png](/images/posts-2020/kJ_l7gPD2.png)

Adjusting the number at the far top right of that view will dynamically tweak how many context lines are included with each line containing the search term.
@@ -101,34 +101,34 @@ Now would also be a good time to go ahead and enable cron jobs so that phpIPAM w
Okay, let's now move on to the phpIPAM web-based UI to continue the setup. After logging in at `https://ipam.lab.bowdre.net/`, I clicked on the red **Administration** menu at the right side and selected **phpIPAM Settings**. Under the **Site Settings** section, I enabled the *Prettify links* option, and under the **Feature Settings** section I toggled on the *API* component. I then hit *Save* at the bottom of the page to apply the changes.

Next, I went to the **Users** item on the left-hand menu to create a new user account which will be used by vRA. I named it `vra`, set a password for the account, and made it a member of the `Operators` group, but didn't grant any special module access.
-![Screenshot 2021-02-20 14.18.47.png](/assets/images/posts-2020/DiqyOlf5S.png)
-![Screenshot 2021-02-20 14.20.49.png](/assets/images/posts-2020/QoxVKC11t.png)
+![Screenshot 2021-02-20 14.18.47.png](/images/posts-2020/DiqyOlf5S.png)
+![Screenshot 2021-02-20 14.20.49.png](/images/posts-2020/QoxVKC11t.png)

The last step in configuring API access is to create an API key. This is done by clicking the **API** item on that left side menu and then selecting *Create API key*. I gave it the app ID `vra`, granted Read/Write permissions, and set the *App Security* option to "SSL with User token".
-![Screenshot 2021-02-20 14.23.50.png](/assets/images/posts-2020/-aPGJhSvz.png)
+![Screenshot 2021-02-20 14.23.50.png](/images/posts-2020/-aPGJhSvz.png)

Once we get things going, our API calls will authenticate with the username and password to get a token and bind that to the app ID.
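*(A quick way to verify that API access works before wiring it into vRA - a sketch using curl, with the password as a placeholder; the returned token then rides along on later calls in a `token` header:)*

```shell
# POST to /api/<app_id>/user/ with basic auth to obtain a session token
curl -s -X POST -u vra:SuperSecretPassword https://ipam.lab.bowdre.net/api/vra/user/
```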

### Step 2: Configuring phpIPAM subnets
Our fancy new IPAM solution is ready to go - except for the whole bit about managing IPs. We need to tell it about the network segments we'd like it to manage. phpIPAM uses "Sections" to group subnets together, so we start by creating a new Section at **Administration > IP related management > Sections**. I named my new section `Lab`, and pretty much left all the default options. Be sure that the `Operators` group has read/write access to this section and the subnets we're going to create inside it!
-![Screenshot 2021-02-20 14.33.39.png](/assets/images/posts-2020/6yo39lXI7.png)
+![Screenshot 2021-02-20 14.33.39.png](/images/posts-2020/6yo39lXI7.png)

We should also go ahead and create a Nameserver set so that phpIPAM will be able to tell its clients (vRA) what server(s) to use for DNS. Do this at **Administration > IP related management > Nameservers**. I created a new entry called `Lab` and pointed it at my internal DNS server, `192.168.1.5`.
-![Screenshot 2021-02-20 14.40.57.png](/assets/images/posts-2020/pDsEh18bx.png)
+![Screenshot 2021-02-20 14.40.57.png](/images/posts-2020/pDsEh18bx.png)

Okay, we're finally ready to start entering our subnets at **Administration > IP related management > Subnets**. For each one, I entered the Subnet in CIDR format, gave it a useful description, and associated it with my `Lab` section. I expanded the *VLAN* dropdown and used the *Add new VLAN* option to enter the corresponding VLAN information, and also selected the Nameserver I had just created.
-![Screenshot 2021-02-20 14.44.20.png](/assets/images/posts-2020/-PHf9oUyM.png)
+![Screenshot 2021-02-20 14.44.20.png](/images/posts-2020/-PHf9oUyM.png)
I also enabled the options *Mark as pool*, *Check hosts status*, *Discover new hosts*, and *Resolve DNS names*.
-![Screenshot 2021-02-20 15.03.13.png](/assets/images/posts-2020/SR7oD0jsG.png)
+![Screenshot 2021-02-20 15.03.13.png](/images/posts-2020/SR7oD0jsG.png)

I then used the *Scan subnets for new hosts* button to run a discovery scan against the new subnet.
-![Screenshot 2021-02-20 15.06.41.png](/assets/images/posts-2020/4WQ8HWJ2N.png)
+![Screenshot 2021-02-20 15.06.41.png](/images/posts-2020/4WQ8HWJ2N.png)

The scan only found a single host, `172.16.20.1`, which is the subnet's gateway address hosted by the VyOS router. I used the pencil icon to edit the IP and mark it as the gateway:
-![Screenshot 2021-02-20 15.08.43.png](/assets/images/posts-2020/2otDJvqRP.png)
+![Screenshot 2021-02-20 15.08.43.png](/images/posts-2020/2otDJvqRP.png)

phpIPAM now knows the network address, mask, gateway, VLAN, and DNS configuration for this subnet - all things that will be useful for clients seeking an address. I then repeated these steps for the remaining subnets.
-![Screenshot 2021-02-20 15.13.38.png](/assets/images/posts-2020/09RIXJc12.png)
+![Screenshot 2021-02-20 15.13.38.png](/images/posts-2020/09RIXJc12.png)

Now for the *real* fun!
@ -352,10 +352,10 @@ try:
|
|||
You can view the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/python/validate_endpoint/source.py).
|
||||
|
||||
After completing each operation, run `mvn package -PcollectDependencies -Duser.id=${UID}` to rebuild the package, and then import it into vRA again. This time, you'll see the new "API App ID" field on the form:
|
||||
![Screenshot 2021-02-21 16.30.33.png](/assets/images/posts-2020/bpx8iKUHF.png)
|
||||
![Screenshot 2021-02-21 16.30.33.png](/images/posts-2020/bpx8iKUHF.png)
|
||||
|
||||
Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*.
|
||||
![Screenshot 2021-02-21 19.18.43.png](/assets/images/posts-2020/e4PTJxfqH.png)
|
||||
![Screenshot 2021-02-21 19.18.43.png](/images/posts-2020/e4PTJxfqH.png)
|
||||
Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. You can also review the Inputs to make sure they look like what you expected:
|
||||
```json
|
||||
{
|
||||
|
@ -504,7 +504,7 @@ vRA runs the `phpIPAM_GetIPRanges` action about every ten minutes so keep checki
|
|||
Note that it *did not* pick up my "Home Network" range since it wasn't set to be a pool.
|
||||
|
||||
We can also navigate to **Infrastructure > Networks > IP Ranges** to view them in all their glory:
|
||||
![Screenshot 2021-02-21 17.49.12.png](/assets/images/posts-2020/7_QI-Ti8g.png)
|
||||
![Screenshot 2021-02-21 17.49.12.png](/images/posts-2020/7_QI-Ti8g.png)
|
||||
|
||||
You can then follow [these instructions](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) to associate the external IP ranges with networks available for vRA deployments.
|
||||
|
||||
|
@ -638,7 +638,7 @@ The full `allocate_ip` code is [here](https://github.com/jbowdre/phpIPAM-for-vRA
|
|||
[2021-02-22 01:31:41,790] [INFO] - Successfully reserved ['172.16.40.2'] for BOW-VLTST-XXX41.
|
||||
```
|
||||
You can also check for a reserved address in phpIPAM:
|
||||
![Screenshot 2021-02-21 19.32.38.png](/assets/images/posts-2020/3BQnEd0bY.png)
|
||||
![Screenshot 2021-02-21 19.32.38.png](/images/posts-2020/3BQnEd0bY.png)
|
||||
|
||||
Almost done!
|
||||
|
||||
|
|
|
@ -27,35 +27,35 @@ Looking back, that's kind of a lot. I can see why I've been working on this for
|
|||
|
||||
### vSphere setup
|
||||
In production, I'll want to be able to deploy to different compute clusters spanning multiple vCenters. That's a bit difficult to do on a single physical server, but I still wanted to be able to simulate that sort of dynamic resource selection. So for development and testing in my lab, I'll be using two sites - `BOW` and `DRE`. I ditched the complicated "just because I can" vSAN I'd built previously and instead spun up two single-host nested clusters, one for each of my sites:
|
||||
![vCenter showing the BOW and DRE clusters](/assets/images/posts-2020/KUCwEgEhN.png)
|
||||
![vCenter showing the BOW and DRE clusters](/images/posts-2020/KUCwEgEhN.png)
|
||||
|
||||
Those hosts have one virtual NIC each on a standard switch connected to my home network, and a second NIC each connected to the ["isolated" internal lab network](vmware-home-lab-on-intel-nuc-9#networking) with all the VLANs for the guests to run on:
|
||||
![dvSwitch showing attached hosts and dvPortGroups](/assets/images/posts-2020/y8vZEnWqR.png)
|
||||
![dvSwitch showing attached hosts and dvPortGroups](/images/posts-2020/y8vZEnWqR.png)
|
||||
|
||||
### vRA setup
|
||||
On the vRA side of things, I logged in to the Cloud Assembly portion and went to the Infrastructure tab. I first created a Project named `LAB`, added the vCenter as a Cloud Account, and then created a Cloud Zone for the vCenter as well. On the Compute tab of the Cloud Zone properties, I manually added both the `BOW` and `DRE` clusters.
|
||||
![BOW and DRE Clusters added to Cloud Zone](/assets/images/posts-2020/sCQKUH07e.png)
|
||||
![BOW and DRE Clusters added to Cloud Zone](/images/posts-2020/sCQKUH07e.png)
|
||||
|
||||
I also created a Network Profile and added each of the nested dvPortGroups I had created for this purpose.
|
||||
![Network Profile with added vSphere networks](/assets/images/posts-2020/LST4LisFl.png)
|
||||
![Network Profile with added vSphere networks](/images/posts-2020/LST4LisFl.png)
|
||||
|
||||
Each network also gets associated with the related IP Range which was [imported from {php}IPAM](integrating-phpipam-with-vrealize-automation-8).
|
||||
![IP Range bound to a network](/assets/images/posts-2020/AZsVThaRO.png)
|
||||
![IP Range bound to a network](/images/posts-2020/AZsVThaRO.png)
|
||||
|
||||
Since each of my hosts has only 100GB of datastore and my Windows template specifies a 60GB VMDK, I went ahead and created a Storage Profile so that deployments would default to being Thin Provisioned.
|
||||
![Thin-provision storage profile](/assets/images/posts-2020/3vQER.png)
|
||||
![Thin-provision storage profile](/images/posts-2020/3vQER.png)
|
||||
|
||||
I created a few Flavor Mappings ranging from `micro` (1vCPU|1GB RAM) to `giant` (8vCPU|16GB) but for this resource-constrained lab I'll stick mostly to the `micro`, `tiny` (1vCPU|2GB), and `small` (2vCPU|2GB) sizes.
|
||||
![T-shirt size Flavor Mappings](/assets/images/posts-2020/lodJlc8Hp.png)
|
||||
![T-shirt size Flavor Mappings](/images/posts-2020/lodJlc8Hp.png)
|
||||
|
||||
And I created an Image Mapping named `ws2019` which points to a Windows Server 2019 Core template I have stored in my lab's Content Library (cleverly-named "LABrary" for my own amusement).
|
||||
![Windows Server Image Mapping](/assets/images/posts-2020/6k06ySON7.png)
|
||||
![Windows Server Image Mapping](/images/posts-2020/6k06ySON7.png)
|
||||
|
||||
And with that, my vRA infrastructure is ready for testing a *very* basic deployment.
|
||||
|
||||
### My First Cloud Template
|
||||
Now it's time to leave the Infrastructure tab and visit the Design one, where I'll create a new Cloud Template (what previous versions of vRA called "Blueprints"). I start by dragging one each of the **vSphere > Machine** and **vSphere > Network** entities onto the workspace. I then pop over to the Code tab on the right to throw together some simple YAML statements:
|
||||
![My first Cloud Template!](/assets/images/posts-2020/RtMljqM9x.png)
|
||||
![My first Cloud Template!](/images/posts-2020/RtMljqM9x.png)
|
||||
|
||||
VMware's got a [pretty great document](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-6BA1DA96-5C20-44BF-9C81-F8132B9B4872.html#list-of-input-properties-2) describing the syntax for these input properties, plus a lot of it is kind of self-explanatory. Let's step through this real quick:
|
||||
```yaml
|
||||
|
@ -152,40 +152,40 @@ resources:
|
|||
```
|
||||
|
||||
Cool! But does it work? Hitting the **Test** button at the bottom right is a great way to validate a template before actually running a deployment. That will confirm that the template syntax, infrastructure, and IPAM configuration are all set up correctly to support this particular deployment.
|
||||
![Test inputs](/assets/images/posts-2020/lNmduGWr1.png)
|
||||
![Test results](/assets/images/posts-2020/BA2BWCd6K.png)
|
||||
![Test inputs](/images/posts-2020/lNmduGWr1.png)
|
||||
![Test results](/images/posts-2020/BA2BWCd6K.png)
|
||||
|
||||
Looks good! I like to click on the **Provisioning Diagram** link to see a bit more detail about where components were placed and why. That's also an immensely helpful troubleshooting option if the test *isn't* successful.
|
||||
![Provisioning diagram](/assets/images/posts-2020/PIeW8xA2j.png)
|
||||
![Provisioning diagram](/images/posts-2020/PIeW8xA2j.png)
|
||||
|
||||
And finally, I can hit that **Deploy** button to actually spin up this VM.
|
||||
![Deploy this sucker](/assets/images/posts-2020/XmtEm51h2.png)
|
||||
![Deploy this sucker](/images/posts-2020/XmtEm51h2.png)
|
||||
|
||||
Each deployment has to have a *unique* deployment name. I got tired of trying to keep track of which names I had already used, so I kind of settled on a [DATE]_[TIME] format for my test deployments. I'll automate this tedious step away in the future.
|
||||
|
||||
I then confirm that the (automatically-selected default) inputs are correct and kick it off.
|
||||
![Deployment inputs](/assets/images/posts-2020/HC6vQMeVT.png)
|
||||
![Deployment inputs](/images/posts-2020/HC6vQMeVT.png)
|
||||
|
||||
The deployment will take a few minutes. I like to click over to the **History** tab to see a bit more detail as things progress.
|
||||
![Deployment history](/assets/images/posts-2020/uklHiv46Y.png)
|
||||
![Deployment history](/images/posts-2020/uklHiv46Y.png)
|
||||
|
||||
It doesn't take too long for activity to show up on the vSphere side of things:
|
||||
![vSphere is cloning the source template](/assets/images/posts-2020/4dNwfNNDY.png)
|
||||
![vSphere is cloning the source template](/images/posts-2020/4dNwfNNDY.png)
|
||||
|
||||
And there's the completed VM - notice the statically-applied IP address courtesy of {php}IPAM!
|
||||
![Completed test VM](/assets/images/posts-2020/3-UIo1Ykn.png)
|
||||
![Completed test VM](/images/posts-2020/3-UIo1Ykn.png)
|
||||
|
||||
And I can pop over to the IPAM interface to confirm that the IP has been marked as reserved as well:
|
||||
![Newly-created IPAM reservation](/assets/images/posts-2020/mAfdPLKnp.png)
|
||||
![Newly-created IPAM reservation](/images/posts-2020/mAfdPLKnp.png)
|
||||
|
||||
Fantastic! But one of my objectives from earlier was to let the user control where a VM gets provisioned. Fortunately it's pretty easy to implement thanks to vRA 8's use of tags.
|
||||
|
||||
### Using tags for resource placement
|
||||
Just about every entity within vRA 8 can have tags applied to it, and you can leverage those tags in some pretty creative and useful ways. For now, I'll start by applying tags to my compute resources; I'll use `comp:bow` for the "BOW Cluster" and `comp:dre` for the "DRE Cluster".
|
||||
![Compute tags](/assets/images/posts-2020/oz1IAp-i0.png)
|
||||
![Compute tags](/images/posts-2020/oz1IAp-i0.png)
|
||||
|
||||
I'll also use the `net:bow` and `net:dre` tags to logically divide up the networks between my sites:
|
||||
![Network tags](/assets/images/posts-2020/ngSWbVI4Y.png)
|
||||
![Network tags](/images/posts-2020/ngSWbVI4Y.png)
|
||||
|
||||
I can now add an input to the Cloud Template so the user can pick which site they need to deploy to:
|
||||
|
||||
|
@ -226,16 +226,16 @@ resources:
|
|||
```
|
||||
|
||||
So the VM will now only be deployed to the compute resource and networks which are tagged to match the selected Site identifier. I ran another test to make sure I didn't break anything:
|
||||
![Testing against the DRE site](/assets/images/posts-2020/Q-2ZQg_ji.png)
|
||||
![Testing against the DRE site](/images/posts-2020/Q-2ZQg_ji.png)
|
||||
|
||||
It came back successful, so I clicked through to see the provisioning diagram. On the network tab, I see that only the last two networks (`d1650-Servers-4` and `d1660-Servers-5`) were considered since the first three didn't match the required `net:dre` tag:
|
||||
![Network provisioning diagram](/assets/images/posts-2020/XVD9QVU-S.png)
|
||||
![Network provisioning diagram](/images/posts-2020/XVD9QVU-S.png)
|
||||
|
||||
And it's a similar story on the compute tab:
|
||||
![Compute provisioning diagram](/assets/images/posts-2020/URW7vc1ih.png)
|
||||
![Compute provisioning diagram](/images/posts-2020/URW7vc1ih.png)
|
||||
|
||||
As a final test for this change, I kicked off one deployment to each site to make sure things worked as expected.
|
||||
![vSphere showing one VM at each site](/assets/images/posts-2020/VZaK4btzl.png)
|
||||
![vSphere showing one VM at each site](/images/posts-2020/VZaK4btzl.png)
|
||||
|
||||
Nice!
|
||||
|
||||
|
|
|
@ -13,7 +13,7 @@ title: 'vRA8 Custom Provisioning: Part Two'
|
|||
---
|
||||
|
||||
We [last left off this series](vra8-custom-provisioning-part-one) after I'd set up vRA, performed a test deployment off of a minimal cloud template, and then enhanced the simple template to use vRA tags to let the user specify where a VM should be provisioned. But these VMs have kind of dumb names; right now, they're just getting named after the requesting user plus a couple of random digits, courtesy of a simple [naming template defined on the project's Provisioning page](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-AD400ED7-EB3A-4D36-B9A7-81E100FB3003.html?hWord=N4IghgNiBcIHZgLYEs4HMQF8g):
|
||||
![Naming template](/assets/images/posts-2020/zAF26KJnO.png)
|
||||
![Naming template](/images/posts-2020/zAF26KJnO.png)
|
||||
|
||||
I could use this naming template to *almost* accomplish what I need from a naming solution, but I don't like that the numbers are random rather than sequential (I want to deploy `server001` followed by `server002` rather than `server343` followed by `server718`). And it's not enough for me that a VM's name be unique just within the scope of vRA - the hostname should be unique across my entire environment.
|
||||
|
||||
|
@ -192,10 +192,10 @@ resources:
|
|||
```
|
||||
|
||||
Great! Here's what it looks like on the deployment request:
|
||||
![Deployment request with naming elements](/assets/images/posts-2020/iqsHm5zQR.png)
|
||||
![Deployment request with naming elements](/images/posts-2020/iqsHm5zQR.png)
|
||||
|
||||
...but the deployed VM got named `john-329`. Why?
|
||||
![VM deployed with a lame name](/assets/images/posts-2020/lUo1odZ03.png)
|
||||
![VM deployed with a lame name](/images/posts-2020/lUo1odZ03.png)
|
||||
|
||||
Oh yeah, I need to create a thing that will take these naming elements, mash them together, check for any conflicts, and then apply the new name to the VM. vRealize Orchestrator, it's your time!
|
||||
|
||||
|
@ -203,28 +203,28 @@ Oh yeah, I need to create a thing that will take these naming elements, mash the
|
|||
When I first started looking for a naming solution, I found a [really handy blog post from Michael Poore](https://blog.v12n.io/custom-naming-in-vrealize-automation-8x-1/) that described his solution to doing custom naming. I wound up following his general approach but had to adapt it a bit to make the code work in vRO 8 and to add in the additional checks I wanted. So credit to Michael for getting me pointed in the right direction!
|
||||
|
||||
I start by hopping over to the Orchestrator interface and navigating to the Configurations section. I'm going to create a new configuration folder named `CustomProvisioning` that will store all the Configuration Elements I'll use to configure my workflows on this project.
|
||||
![Configuration Folder](/assets/images/posts-2020/y7JKSxsqE.png)
|
||||
![Configuration Folder](/images/posts-2020/y7JKSxsqE.png)
|
||||
|
||||
Defining certain variables within configurations separates those from the workflows themselves, making the workflows much more portable. That will allow me to transfer the same code between multiple environments (like my homelab and my lab environment at work) without having to rewrite a bunch of hardcoded values.
|
||||
|
||||
Now I'll create a new configuration within the new folder. This will hold information about the naming schema so I name it `namingSchema`. In it, I create two strings to define the base naming format (up to the numbers on the end) and full name format (including the numbers). I define `baseFormat` and `nameFormat` as templates based on what I put together earlier.
|
||||
![The namingSchema configuration](/assets/images/posts-2020/zLec-3X_D.png)
|
||||
![The namingSchema configuration](/images/posts-2020/zLec-3X_D.png)
|
||||
|
||||
I also create another configuration named `computerNames`. When vRO picks a name for a VM, it will record it here as a `number` variable named after the "base name" (`BOW-DAPP-WEB`) and the last-used sequence as the value (`001`). This will make it quick-and-easy to see what the next VM should be named. For now, though, I just need the configuration to not be empty so I add a single variable named `sample` just to take up space.
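For reference, reading and updating that configuration element from a script looks roughly like this - a minimal sketch using vRO's standard ConfigurationElement scripting API, with an example attribute name; the real version of this logic lives in the `generate hostnameSeq & candidateVmName` task later on:
```js
// Sketch: look up the computerNames configuration element and read/update
// the last-used sequence for a given base name (example value only).
var baseName = "BOW-DAPP-WEB";
var category = Server.getConfigurationElementCategoryWithPath("CustomProvisioning");
var computerNames = null;
for each (var element in category.configurationElements) {
    if (element.name == "computerNames") { computerNames = element; }
}

// getAttributeWithKey() returns an Attribute object, or null if the key doesn't exist yet
var attribute = computerNames.getAttributeWithKey(baseName);
var lastSeq = (attribute !== null) ? attribute.value : 0;

// record the next sequence number back into the configuration element
computerNames.setAttributeWithKey(baseName, lastSeq + 1);
System.log("Next sequence for " + baseName + " is " + (lastSeq + 1));
```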
|
||||
![The computerNames configuration](/assets/images/posts-2020/pqrvUNmsj.png)
|
||||
![The computerNames configuration](/images/posts-2020/pqrvUNmsj.png)
|
||||
|
||||
Okay, now it's time to get messy.
|
||||
|
||||
### The vRO workflow
|
||||
Just like with the configuration elements, I create a new workflow folder named `CustomProvisioning` to keep all my workflows together. And then I make a `VM Provisioning` workflow that will be used for pre-provisioning tasks.
|
||||
![Workflow organization](/assets/images/posts-2020/qt-D1mJFE.png)
|
||||
![Workflow organization](/images/posts-2020/qt-D1mJFE.png)
|
||||
|
||||
On the Inputs/Outputs tab of the workflow, I create a single input named `inputProperties` of type `Properties` which will hold all the information about the deployment coming from the vRA side of things.
|
||||
![inputProperties](/assets/images/posts-2020/tiBJVKYdf.png)
|
||||
![inputProperties](/images/posts-2020/tiBJVKYdf.png)
|
||||
|
||||
#### Logging the input properties
|
||||
The first thing I'll want this workflow to do (particularly for testing) is to tell me about the input data from vRA. That will help to eliminate a lot of guesswork. I could just write a script within the workflow to do that, but creating it as a separate action will make it easier to reuse in other workflows. Behold, the `logPayloadProperties` action (nested within the `net.bowdre.utility` module which contains some spoilers for what else is to come!):
|
||||
![image.png](/assets/images/posts-2020/0fSl55whe.png)
|
||||
![image.png](/images/posts-2020/0fSl55whe.png)
|
||||
|
||||
This action has a single input, a `Properties` object named `payload`. (By the way, vRO is pretty particular about variable typing so going forward I'll reference variables as `variableName (type)`.) Here's the JavaScript that will basically loop through each element and write the contents to the vRO debug log:
|
||||
|
||||
|
@ -261,23 +261,23 @@ function logSingleProperty(name,value,i) {
|
|||
```
|
||||
|
||||
Going back to my VM Provisioning workflow, I drag an Action Element onto the canvas and tie it to my new action, passing in `inputProperties (Properties)` as the input:
|
||||
![image.png](/assets/images/posts-2020/o8CgTjSYm.png)
|
||||
![image.png](/images/posts-2020/o8CgTjSYm.png)
|
||||
|
||||
#### Event Broker Subscription
|
||||
And at this point I save the workflow. I'm not finished with it - not by a long shot! - but this is a great place to get the workflow plumbed up to vRA and run a quick test. So I go to the vRA interface, hit up the Extensibility tab, and create a new subscription. I name it "VM Provisioning" and set it to fire on the "Compute allocation" event, which will happen right before the VM starts getting created. I link in my VM Provisioning workflow, and also set this as a blocking execution so that no other/future workflows will run until this one completes.
|
||||
![VM Provisioning subscription](/assets/images/posts-2020/IzaMb39C-.png)
|
||||
![VM Provisioning subscription](/images/posts-2020/IzaMb39C-.png)
|
||||
|
||||
Alrighty, let's test this and see if it works. I head back to the Design tab and kick off another deployment.
|
||||
![Test deployment](/assets/images/posts-2020/6PA6lIOcP.png)
|
||||
![Test deployment](/images/posts-2020/6PA6lIOcP.png)
|
||||
|
||||
I'm going to go grab some more coffee while this runs.
|
||||
![Successful deployment](/assets/images/posts-2020/Zyha7vAwi.png)
|
||||
![Successful deployment](/images/posts-2020/Zyha7vAwi.png)
|
||||
|
||||
And we're back! Now that the deployment completed successfully, I can go back to the Orchestrator view and check the Workflow Runs section to confirm that the VM Provisioning workflow did fire correctly. I can click on it to get more details, and the Logs tab will show me all the lovely data logged by the `logPayloadProperties` action running from the workflow.
|
||||
![Logged payload properties](/assets/images/posts-2020/AiFwzSpWS.png)
|
||||
![Logged payload properties](/images/posts-2020/AiFwzSpWS.png)
|
||||
|
||||
That information can also be seen on the Variables tab:
|
||||
![So many variables](/assets/images/posts-2020/65ECa7nej.png)
|
||||
![So many variables](/images/posts-2020/65ECa7nej.png)
|
||||
|
||||
A really handy thing about capturing the data this way is that I can use the Run Again or Debug button to execute the vRO workflow again without having to actually deploy a new VM from vRA. This will be great for testing as I press onward.
|
||||
|
||||
|
@ -287,7 +287,7 @@ A really handy thing about capturing the data this way is that I can use the Run
|
|||
I'm going to use this VM Provisioning workflow as a sort of top-level wrapper. This workflow will have a task to parse the payload and grab the variables that will be needed for naming a VM, and it will also have a task to actually rename the VM, but it's going to delegate the name generation to another nested workflow. Making the workflows somewhat modular will make it easier to make changes in the future if needed.
|
||||
|
||||
Anyway, I drop a Scriptable Task item onto the workflow canvas to handle parsing the payload - I'll call it `parse payload` - and pass it `inputProperties (Properties)` as its input.
|
||||
![parse payload task](/assets/images/posts-2020/aQg91t93a.png)
|
||||
![parse payload task](/images/posts-2020/aQg91t93a.png)
|
||||
|
||||
The script for this is pretty straightforward:
|
||||
|
||||
|
@ -312,13 +312,13 @@ System.debug("Original names: " + originalNames)
|
|||
```
|
||||
|
||||
It creates a new `requestProperties (Properties)` variable to store the limited set of properties that will be needed for naming - `site`, `environment`, `function`, and `app`. It also stores a copy of the original `resourceNames (Array/string)`, which will be useful when we need to replace the old name with the new one. To make those two new variables accessible to other parts of the workflow, I'll need to also create the variables at the workflow level and map them as outputs of this task:
|
||||
![outputs mapped](/assets/images/posts-2020/4B6wN8QeG.png)
|
||||
![outputs mapped](/images/posts-2020/4B6wN8QeG.png)
|
||||
|
||||
I'll also drop in a "Foreach Element" item, which will run a linked workflow once for each item in an input array (`originalNames (Array/string)` in this case). I haven't actually created that nested workflow yet so I'm going to skip selecting that for now.
|
||||
![Nested workflow placeholder](/assets/images/posts-2020/UIafeShcv.png)
|
||||
![Nested workflow placeholder](/images/posts-2020/UIafeShcv.png)
|
||||
|
||||
The final step of this workflow will be to replace the existing contents of `resourceNames (Array/string)` with the new name. I'll do that with another scriptable task element, named `Apply new names`, which takes `inputProperties (Properties)` and `newNames (Array/string)` as inputs and returns `resourceNames (Array/string)` as a workflow output back to vRA. vRA will see that `resourceNames` has changed and it will update the name of the deployed resource (the VM) accordingly.
|
||||
![Apply new names task](/assets/images/posts-2020/h_PHeT6af.png)
|
||||
![Apply new names task](/images/posts-2020/h_PHeT6af.png)
|
||||
|
||||
And here's the script for that task:
|
||||
|
||||
|
@ -338,17 +338,17 @@ Now's a good time to save this workflow (ignoring the warning about it failing v
|
|||
|
||||
### Nested workflow
|
||||
I'm a creative person so I'm going to call this workflow "Generate unique hostname". It's going to receive `requestProperties (Properties)` as its sole input, and will return `nextVmName (String)` as its sole output.
|
||||
![Workflow input and output](/assets/images/posts-2020/5bWfqh4ZSE.png)
|
||||
![Workflow input and output](/images/posts-2020/5bWfqh4ZSE.png)
|
||||
|
||||
I will also need to bind a couple of workflow variables to those configuration elements I created earlier. This is done by creating the variable as usual (`baseFormat (string)` in this case), toggling the "Bind to configuration" option, and then searching for the appropriate configuration. It's important to make sure the selected type matches that of the configuration element - otherwise it won't show up in the list.
|
||||
![Binding a variable to a configuration](/assets/images/posts-2020/PubHnv_jM.png)
|
||||
![Binding a variable to a configuration](/images/posts-2020/PubHnv_jM.png)
|
||||
|
||||
I do the same for the `nameFormat (string)` variable as well.
|
||||
![Configuration variables added to the workflow](/assets/images/posts-2020/7Sb3j2PS3.png)
|
||||
![Configuration variables added to the workflow](/images/posts-2020/7Sb3j2PS3.png)
|
||||
|
||||
#### Task: create lock
|
||||
Okay, on to the schema. This workflow may take a little while to execute, and it would be bad if another deployment came in while it was running - the two runs might both assign the same hostname without realizing it. Fortunately vRO has a locking system which can be used to avoid that. Accordingly, I'll add a scriptable task element to the canvas and call it `create lock`. It will have two inputs used for identifying the lock so that it can be easily removed later on, so I create a new variable `lockOwner (string)` with the value `eventBroker` and another named `lockId (string)` set to `namingLock`.
|
||||
![Task: create lock](/assets/images/posts-2020/G0TEJ30003.png)
|
||||
![Task: create lock](/images/posts-2020/G0TEJ30003.png)
|
||||
|
||||
The script is very short:
|
||||
|
||||
|
@ -363,7 +363,7 @@ LockingSystem.lockAndWait(lockId, lockOwner)
|
|||
|
||||
#### Task: generate hostnameBase
|
||||
We're getting to the meat of the operation now - another scriptable task named `generate hostnameBase` which will take the naming components from the deployment properties and stick them together in the form defined in the `nameFormat (String)` configuration. The inputs will be the existing `nameFormat (String)`, `requestProperties (Properties)`, and `baseFormat (String)` variables, and it will output new `hostnameBase (String)` ("`BOW-DAPP-WEB`") and `digitCount (Number)` ("`3`", one for each `#` in the format) variables. I'll also go ahead and initialize `hostnameSeq (Number)` to `0` to prepare for a later step.
|
||||
![Task: generate hostnameBase](/assets/images/posts-2020/XATryy20y.png)
|
||||
![Task: generate hostnameBase](/images/posts-2020/XATryy20y.png)
|
||||
|
||||
|
||||
```js
|
||||
|
@ -390,19 +390,19 @@ System.debug("Hostname base: " + hostnameBase)
|
|||
|
||||
#### Interlude: connecting vRO to vCenter
|
||||
Coming up, I'm going to want to connect to vCenter so I can find out if there are any existing VMs with a similar name. I'll use the vSphere vCenter Plug-in which is included with vRO to facilitate that, but that means I'll first need to set up that connection. So I'll save the workflow I've been working on (save early, save often) and then go run the preloaded "Add a vCenter Server instance" workflow. The first page of required inputs is pretty self-explanatory:
|
||||
![Add a vCenter Server instance - vCenter properties](/assets/images/posts-2020/6Gpxapzd3.png)
|
||||
![Add a vCenter Server instance - vCenter properties](/images/posts-2020/6Gpxapzd3.png)
|
||||
|
||||
On the connection properties page, I unchecked the per-user connection in favor of using a single service account, the same one that I'm already using for vRA's connection to vCenter.
|
||||
![Add a vCenter Server instance - Connection properties](/assets/images/posts-2020/RuqJhj00_.png)
|
||||
![Add a vCenter Server instance - Connection properties](/images/posts-2020/RuqJhj00_.png)
|
||||
|
||||
After successful completion of the workflow, I can go to Administration > Inventory and confirm that the new endpoint is there:
|
||||
![vCenter plugin endpoint](/assets/images/posts-2020/rUmGPdz2I.png)
|
||||
![vCenter plugin endpoint](/images/posts-2020/rUmGPdz2I.png)
|
||||
|
||||
I've only got the one vCenter in my lab. At work, I've got multiple vCenters so I would need to repeat these steps to add each of them as an endpoint.
|
||||
|
||||
#### Task: prepare vCenter SDK connection
|
||||
Anyway, back to my "Generate unique hostname" workflow, where I'll add another scriptable task to prepare the vCenter SDK connection. This one doesn't require any inputs, but will output an array of `VC:SdkConnection` objects:
|
||||
![Task: prepare vCenter SDK connection](/assets/images/posts-2020/ByIWO66PC.png)
|
||||
![Task: prepare vCenter SDK connection](/images/posts-2020/ByIWO66PC.png)
|
||||
|
||||
```js
|
||||
// JavaScript: prepare vCenter SDK connection
|
||||
|
@ -415,11 +415,11 @@ System.log("Preparing vCenter SDK connection...")
|
|||
|
||||
#### ForEach element: search VMs by name
|
||||
Next, I'm going to drop another ForEach element onto the canvas. For each vCenter endpoint in `sdkConnections (Array/VC:SdkConnection)`, it will execute the workflow titled "Get virtual machines by name with PC". I map the required `vc` input to `*sdkConnections (VC:SdkConnection)`, `filter` to `hostnameBase (String)`, and skip `rootVmFolder` since I don't care where a VM resides. And I create a new `vmsByHost (Array/Array)` variable to hold the output.
|
||||
![ForEach: search VMs by name](/assets/images/posts-2020/mnOxV2udH.png)
|
||||
![ForEach: search VMs by name](/images/posts-2020/mnOxV2udH.png)
|
||||
|
||||
#### Task: unpack results for all hosts
|
||||
That `vmsByHost (Array/array)` object contains any and all VMs which match `hostnameBase (String)`, but they're broken down by the host they're running on. So I use a scriptable task to convert that array-of-arrays into a new array-of-strings containing just the VM names.
|
||||
![Task: unpack results for all hosts](/assets/images/posts-2020/gIEFRnilq.png)
|
||||
![Task: unpack results for all hosts](/images/posts-2020/gIEFRnilq.png)
|
||||
|
||||
```js
|
||||
// JavaScript: unpack results for all hosts
|
||||
|
@ -440,7 +440,7 @@ vmNames = vms.map(function(i) {return (i.displayName).toUpperCase()})
|
|||
|
||||
#### Task: generate hostnameSeq & candidateVmName
|
||||
This scriptable task will check the `computerNames` configuration element we created earlier to see if we've already named a VM starting with `hostnameBase (String)`. If such a name exists, we'll increment the number at the end by one, and return that as a new `hostnameSeq (Number)` variable; if it's the first of its kind, `hostnameSeq (Number)` will be set to `1`. And then we'll combine `hostnameBase (String)` and `hostnameSeq (Number)` to create the new `candidateVmName (String)`. If things don't work out, this script will throw `errMsg (String)` so I need to add that as an output exception binding as well.
|
||||
![Task: generate hostnameSeq & candidateVmName](/assets/images/posts-2020/fWlSrD56N.png)
|
||||
![Task: generate hostnameSeq & candidateVmName](/images/posts-2020/fWlSrD56N.png)
|
||||
|
||||
```js
|
||||
// JavaScript: generate hostnameSeq & candidateVmName
|
||||
|
@ -487,7 +487,7 @@ System.log("Proposed VM name: " + candidateVmName)
|
|||
|
||||
#### Task: check for VM name conflicts
|
||||
Now that I know what I'd like to try to name this new VM, it's time to start checking for any potential conflicts. So this task will compare my `candidateVmName (String)` against the existing `vmNames (Array/string)` to see if there are any collisions. If there's a match, it will set a new variable called `conflict (Boolean)` to `true` and also report the issue through the `errMsg (String)` output exception binding. Otherwise it will move on to the next check.
|
||||
![Task: check for VM name conflicts](/assets/images/posts-2020/qmHszypww.png)
|
||||
![Task: check for VM name conflicts](/images/posts-2020/qmHszypww.png)
|
||||
|
||||
```js
|
||||
// JavaScript: check for VM name conflicts
|
||||
|
@ -509,10 +509,10 @@ System.log("No VM name conflicts found for " + candidateVmName)
|
|||
So what happens if there *is* a naming conflict? This solution wouldn't be very flexible if it just gave up as soon as it encountered a problem. Fortunately, I planned for this - all I need to do in the event of a conflict is to run the `generate hostnameSeq & candidateVmName` task again to increment `hostnameSeq (Number)` by one, use that to create a new `candidateVmName (String)`, and then continue on with the checks.
|
||||
|
||||
So far, all of the workflow elements have been connected with happy blue lines which show the flow when everything is going according to the plan. Remember that `errMsg (String)` from the last task? When that gets thrown, the flow will switch to follow an angry dashed red line (if there is one). After dropping a new scriptable task onto the canvas, I can click on the blue line connecting it to the previous item and then click the red X to make it go away.
|
||||
![So long, Blue Line!](/assets/images/posts-2020/BOIwhMxKy.png)
|
||||
![So long, Blue Line!](/images/posts-2020/BOIwhMxKy.png)
|
||||
|
||||
I can then drag the new element away from the "everything is fine" flow, and connect it to the `check for VM name conflict` element with that angry dashed red line. Once `conflict resolution` completes (successfully), a happy blue line will direct the flow back to `generate hostnameSeq & candidateVmName` so that the sequence can be incremented and the checks performed again. And finally, a blue line will connect the `check for VM name conflict` task's successful completion to the end of the workflow:
|
||||
![Error -> fix it -> try again](/assets/images/posts-2020/dhcjdDo-E.png)
|
||||
![Error -> fix it -> try again](/images/posts-2020/dhcjdDo-E.png)
|
||||
|
||||
All this task really does is clear the `conflict (Boolean)` flag so that's the only output.
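In script form that amounts to a one-liner - something like this sketch:
```js
// JavaScript: conflict resolution (sketch)
// clear the flag so the naming sequence can be incremented and checked again
conflict = false;
```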
|
||||
|
||||
|
@ -529,7 +529,7 @@ So if `check VM name conflict` encounters a collision with an existing VM name i
|
|||
|
||||
#### Task: return nextVmName
|
||||
Assuming that everything has gone according to plan and the workflow has avoided any naming conflicts, it will need to return `nextVmName (String)` back to the `VM Provisioning` workflow. That's as simple as setting it to the last value of `candidateVmName (String)`:
|
||||
![Task: return nextVmName](/assets/images/posts-2020/5QFTPHp5H.png)
|
||||
![Task: return nextVmName](/images/posts-2020/5QFTPHp5H.png)
|
||||
|
||||
```js
|
||||
// JavaScript: return nextVmName
|
||||
|
@ -542,7 +542,7 @@ System.log(" ***** Selecting [" + nextVmName + "] as the next VM name ***** ")
|
|||
|
||||
#### Task: remove lock
|
||||
And we should also remove that lock that we created at the start of this workflow.
|
||||
![Task: remove lock](/assets/images/posts-2020/BhBnBh8VB.png)
|
||||
![Task: remove lock](/images/posts-2020/BhBnBh8VB.png)
|
||||
|
||||
```js
|
||||
// JavaScript remove lock
|
||||
|
@ -557,26 +557,26 @@ Done! Well, mostly. Right now the workflow only actually releases the lock if it
|
|||
|
||||
#### Default error handler
|
||||
I can use a default error handler to capture an abort due to running out of possible names, release the lock (with an exact copy of the `remove lock` task), and return (failed) control back to the parent workflow.
|
||||
![Default error handler](/assets/images/posts-2020/afDacKjVx.png)
|
||||
![Default error handler](/images/posts-2020/afDacKjVx.png)
|
||||
|
||||
Because the error handler will only fire when the workflow has failed catastrophically, I'll want to make sure the parent workflow knows about it. So I'll set the end mode to "Error, throw an exception" and bind it to that `errMsg (String)` variable to communicate the problem back to the parent.
|
||||
![End Mode](/assets/images/posts-2020/R9d8edeFP.png)
|
||||
![End Mode](/images/posts-2020/R9d8edeFP.png)
|
||||
|
||||
#### Finalizing the VM Provisioning workflow
|
||||
When I had dropped the foreach workflow item into the VM Provisioning workflow earlier, I hadn't configured anything but the name. Now that the nested workflow is complete, I need to fill in the blanks:
|
||||
![Generate unique hostname](/assets/images/posts-2020/F0IZHRj-J.png)
|
||||
![Generate unique hostname](/images/posts-2020/F0IZHRj-J.png)
|
||||
|
||||
So for each item in `originalNames (Array/string)`, this will run the workflow named `Generate unique hostname`. The input to the workflow will be `requestProperties (Properties)`, and the output will be `newNames (Array/string)`.
|
||||
|
||||
|
||||
### Putting it all together now
|
||||
Hokay, so. I've got configuration elements which hold the template for how I want servers to be named and also track which names have been used. My cloud template asks the user to input certain details which will be used to create a useful computer name. And I've added an extensibility subscription in Cloud Assembly which will call this vRealize Orchestrator workflow before the VM gets created:
|
||||
![Workflow: VM Provisioning](/assets/images/posts-2020/cONrdrbb6.png)
|
||||
![Workflow: VM Provisioning](/images/posts-2020/cONrdrbb6.png)
|
||||
|
||||
This workflow first logs all the properties obtained from the vRA side of things, then parses the properties to grab the necessary details. It then passes that information to a nested workflow to actually generate the hostname. Once it gets a result, it updates the deployment properties with the new name so that vRA can configure the VM accordingly.
|
||||
|
||||
The nested workflow is a bit more complicated:
|
||||
![Workflow: Generate unique hostname](/assets/images/posts-2020/siEJSdeDE.png)
|
||||
![Workflow: Generate unique hostname](/images/posts-2020/siEJSdeDE.png)
|
||||
|
||||
It first creates a lock to ensure there won't be multiple instances of this workflow running simultaneously, and then processes data coming from the "parent" workflow to extract the details needed for this workflow. It smashes together the naming elements (site, environment, function, etc) to create a naming base, then connects to each defined vCenter to compile a list of any VMs with the same base. The workflow then consults a configuration element to see which (if any) similar names have already been used, and generates a suggested VM name based on that. It then consults the existing VM list to see if there might be any collisions; if so, it flags the conflict and loops back to generate a new name and try again. Once the conflicts are all cleared, the suggested VM name is made official and returned back up to the VM Provisioning workflow.
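Condensed into JavaScript-flavored pseudocode (a sketch only - the real logic is spread across the workflow elements described above, and the helper functions here are made up for illustration), the nested workflow looks something like this:
```js
// Sketch of the "Generate unique hostname" flow
LockingSystem.lockAndWait(lockId, lockOwner);                  // only one naming run at a time
var hostnameBase = buildBaseName(requestProperties);            // hypothetical helper, e.g. "BOW-DAPP-WEB"
var existingVms = findVmsInAllVcenters(hostnameBase);           // hypothetical helper wrapping the vCenter queries
var hostnameSeq = lastUsedSequenceFor(hostnameBase);            // hypothetical helper reading computerNames
var candidateVmName, conflict;
do {
    conflict = false;
    hostnameSeq++;
    candidateVmName = hostnameBase + zeroPad(hostnameSeq, digitCount);   // hypothetical helper
    if (existingVms.indexOf(candidateVmName) > -1) { conflict = true; }  // collision? bump and retry
} while (conflict);
LockingSystem.unlock(lockId, lockOwner);
nextVmName = candidateVmName;                                   // handed back to the VM Provisioning workflow
```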
|
||||
|
||||
|
@ -584,16 +584,16 @@ Cool. But does it actually work?
|
|||
|
||||
### Testing
|
||||
Remember how I had tested my initial workflow just to see which variables got captured? Now that the workflow has a bit more content, I can just re-run it without having to actually provision a new VM. After doing so, the logging view reports that it worked!
|
||||
![Sweet success!](/assets/images/posts-2020/eZ1BUfesQ.png)
|
||||
![Sweet success!](/images/posts-2020/eZ1BUfesQ.png)
|
||||
|
||||
I can also revisit the `computerNames` configuration element to see the new name reflected there:
|
||||
![More success](/assets/images/posts-2020/krx8rZMmh.png)
|
||||
![More success](/images/posts-2020/krx8rZMmh.png)
|
||||
|
||||
If I run the workflow again, I should see `DRE-DTST-XXX002` assigned, and I do!
|
||||
![Twice as nice](/assets/images/posts-2020/SOPs3mzTs.png)
|
||||
![Twice as nice](/images/posts-2020/SOPs3mzTs.png)
|
||||
|
||||
And, finally, I can go back to vRA and request a new VM and confirm that the name gets correctly applied to the VM.
|
||||
![#winning](/assets/images/posts-2020/HXrAMJrH.png)
|
||||
![#winning](/images/posts-2020/HXrAMJrH.png)
|
||||
|
||||
It's so beautiful!
|
||||
|
||||
|
|
|
@ -21,23 +21,23 @@ Picking up after [Part Two](vra8-custom-provisioning-part-two), I now have a pre
|
|||
Remember how I [used the built-in vSphere plugin](vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter) to let vRO query my vCenter(s) for VMs with a specific name? And how that required first configuring the vCenter endpoint(s) in vRO? I'm going to take a very similar approach here.
|
||||
|
||||
So as before, I'll first need to run the preinstalled "Add an Active Directory server" workflow:
|
||||
![Add an Active Directory server workflow](/assets/images/posts-2020/uUDJXtWKz.png)
|
||||
![Add an Active Directory server workflow](/images/posts-2020/uUDJXtWKz.png)
|
||||
|
||||
I fill out the Connection tab like so:
|
||||
![Connection tab](/assets/images/posts-2020/U6oMWDal2.png)
|
||||
![Connection tab](/images/posts-2020/U6oMWDal2.png)
|
||||
*I don't have SSL enabled on my homelab AD server so I left that unchecked.*
|
||||
|
||||
On the Authentication tab, I tick the box to use a shared session and specify the service account I'll use to connect to AD. For later steps, it will help if this account has the appropriate privileges to create/delete computer accounts, at least within designated OUs.
|
||||
![Authentication tab](/assets/images/posts-2020/7MfV-1uiO.png)
|
||||
![Authentication tab](/images/posts-2020/7MfV-1uiO.png)
|
||||
|
||||
If you've got multiple AD servers, you can use the options on the Alternative Hosts tab to specify those, saving you from having to create a new configuration for each. I've just got the one AD server in my lab, though, so at this point I just hit Run.
|
||||
|
||||
Once it completes successfully, I can visit the Inventory section of the vRO interface to confirm that the new Active Directory endpoint shows up:
|
||||
![New AD endpoint](/assets/images/posts-2020/vlnle_ekN.png)
|
||||
![New AD endpoint](/images/posts-2020/vlnle_ekN.png)
|
||||
|
||||
#### checkForAdConflict Action
|
||||
Since I try to keep things modular, I'm going to write a new vRO action within the `net.bowdre.utility` module called `checkForAdConflict` which can be called from the `Generate unique hostname` workflow. It will take in `computerName (String)` as an input and return a boolean `True` if a conflict is found or `False` if the name is available.
|
||||
![Action: checkForAdConflict](/assets/images/posts-2020/JT7pbzM-5.png)
|
||||
![Action: checkForAdConflict](/images/posts-2020/JT7pbzM-5.png)
|
||||
|
||||
It's basically going to loop through the Active Directory hosts defined in vRO and search each for a matching computer name. Here's the full code:
|
||||
|
||||
|
@ -62,7 +62,7 @@ for each (var adHost in adHosts) {
|
|||
|
||||
#### Adding it to the workflow
|
||||
Now I can pop back over to my massive `Generate unique hostname` workflow and drop in a new scriptable task between the `check for VM name conflicts` and `return nextVmName` tasks. It will bring in `candidateVmName (String)` as well as `conflict (Boolean)` as inputs, return `conflict (Boolean)` as an output, and `errMsg (String)` will be used for exception handling. If `errMsg (String)` is thrown, the flow will follow the dashed red line back to the `conflict resolution` action.
|
||||
![Action: check for AD conflict](/assets/images/posts-2020/iB1bjdC8C.png)
|
||||
![Action: check for AD conflict](/images/posts-2020/iB1bjdC8C.png)
|
||||
|
||||
I'm using this as a scriptable task so that I can do a little bit of processing before I call the action I created earlier - namely, if `conflict (Boolean)` was already set, the task should skip any further processing. That does mean that I'll need to call the action by both its module and name using `System.getModule("net.bowdre.utility").checkForAdConflict(candidateVmName)`. So here's the full script:
|
||||
|
||||
|
@ -177,15 +177,15 @@ checkDnsConflicts.zip handler.ps1 Modules
|
|||
|
||||
#### checkForDnsConflict action (Deprecated)
|
||||
And now I can go into vRO and create a new action called `checkForDnsConflict` inside my `net.bowdre.utility` module. This time, I change the Language to `PowerCLI 12 (PowerShell 7.0)` and switch the Type to `Zip` to reveal the Import button.
|
||||
![Preparing to import the zip](/assets/images/posts-2020/sjCtvoZA0.png)
|
||||
![Preparing to import the zip](/images/posts-2020/sjCtvoZA0.png)
|
||||
|
||||
Clicking that button lets me browse to the file I need to import. I can also set up the two input variables that the script requires, `hostname (String)` and `domain (String)`.
|
||||
![Package imported and variables defined](/assets/images/posts-2020/xPvBx3oVX.png)
|
||||
![Package imported and variables defined](/images/posts-2020/xPvBx3oVX.png)
|
||||
|
||||
#### Adding it to the workflow
|
||||
Just like with the `check for AD conflict` action, I'll add this onto the workflow as a scriptable task, this time between that action and the `return nextVmName` one. This will take `candidateVmName (String)`, `conflict (Boolean)`, and `requestProperties (Properties)` as inputs, and will return `conflict (Boolean)` as its sole output. The task will use `errMsg (String)` as its exception binding, which will divert flow via the dashed red line back to the `conflict resolution` task.
|
||||
|
||||
![Task: check for DNS conflict](/assets/images/posts-2020/uSunGKJfH.png)
|
||||
![Task: check for DNS conflict](/images/posts-2020/uSunGKJfH.png)
|
||||
|
||||
_[Update] The below script has been altered to drop the unneeded call to my homemade `checkForDnsConflict` action and instead use the built-in `System.resolveHostName()`. Thanks @powertim!_
|
||||
|
||||
|
@ -211,15 +211,15 @@ if (conflict) {
|
|||
|
||||
### Testing
|
||||
Once that's all in place, I kick off another deployment to make sure that everything works correctly. After it completes, I can navigate to the **Extensibility > Workflow runs** section of the vRA interface to review the details:
|
||||
![Workflow run success](/assets/images/posts-2020/GZKQbELfM.png)
|
||||
![Workflow run success](/images/posts-2020/GZKQbELfM.png)
|
||||
|
||||
It worked!
|
||||
|
||||
But what if there *had* been conflicts? It's important to make sure that works too. I know that if I run that deployment again, the next VMs will be named `DRE-DTST-XXX008` and then `DRE-DTST-XXX009`. So I'm going to force conflicts by creating an AD object for one and a DNS record for the other.
|
||||
![Making conflicts](/assets/images/posts-2020/6HBIUf6KE.png)
|
||||
![Making conflicts](/images/posts-2020/6HBIUf6KE.png)
|
||||
|
||||
And I'll kick off another deployment and see what happens.
|
||||
![Workflow success even with conflicts](/assets/images/posts-2020/K6vcxpDj8.png)
|
||||
![Workflow success even with conflicts](/images/posts-2020/K6vcxpDj8.png)
|
||||
|
||||
The workflow saw that the last VM was created as `-007` so it first grabbed `-008`. It saw that `-008` already existed in AD, so it incremented up to try `-009`. The workflow then found that a record for `-009` was present in DNS, so it bumped up to `-010`. That name finally passed through the checks and so the VM was deployed with the name `DRE-DTST-XXX010`. Success!
|
||||
|
||||
|
|
|
@ -20,28 +20,28 @@ This post will add in some "front-end" operations, like creating a customized VM
|
|||
So far, I've been working either in the Cloud Assembly or Orchestrator UIs, both of which are really geared toward administrators. Now I'm going to be working with Service Broker which will provide the user-facing front-end. This is where "normal" users will be able to submit provisioning requests without having to worry about any of the underlying infrastructure or orchestration.
|
||||
|
||||
Before I can do anything with my Cloud Template in the Service Broker UI, though, I'll need to release it from Cloud Assembly. I do this by opening the template on the *Design* tab and clicking the *Version* button at the bottom of the screen. I'll label this as `1.0` and tick the checkbox to *Release this version to the catalog*.
|
||||
![Releasing the Cloud Template to the Service Broker catalog](/assets/images/posts-2020/0-9BaWJqq.png)
|
||||
![Releasing the Cloud Template to the Service Broker catalog](/images/posts-2020/0-9BaWJqq.png)
|
||||
|
||||
I can then go to the Service Broker UI and add a new Content Source for my Cloud Assembly templates.
|
||||
![Add a new Content Source](/assets/images/posts-2020/4X1dPG_Rq.png)
|
||||
![Adding a new Content Source](/assets/images/posts-2020/af-OEP5Tu.png)
|
||||
![Add a new Content Source](/images/posts-2020/4X1dPG_Rq.png)
|
||||
![Adding a new Content Source](/images/posts-2020/af-OEP5Tu.png)
|
||||
After hitting the *Create & Import* button, all released Cloud Templates in the selected Project will show up in the Service Broker *Content* section:
|
||||
![New content!](/assets/images/posts-2020/Hlnnd_8Ed.png)
|
||||
![New content!](/images/posts-2020/Hlnnd_8Ed.png)
|
||||
|
||||
In order for users to deploy from this template, I also need to go to *Content Sharing*, select the Project, and share the content. This can be done either at the Project level or by selecting individual content items.
|
||||
![Content sharing](/assets/images/posts-2020/iScnhmzVY.png)
|
||||
![Content sharing](/images/posts-2020/iScnhmzVY.png)
|
||||
|
||||
That template now appears on the Service Broker *Catalog* tab:
|
||||
![Catalog items](/assets/images/posts-2020/09faF5-Fm.png)
|
||||
![Catalog items](/images/posts-2020/09faF5-Fm.png)
|
||||
|
||||
That's cool and all, and I could go ahead and request a deployment off of that catalog item right now - but I'm really interested in being able to customize the request form. I do that by clicking on the little three-dot menu icon next to the Content entry and selecting the *Customize form* option.
|
||||
![Customize form](/assets/images/posts-2020/ZPsS0oZuc.png)
|
||||
![Customize form](/images/posts-2020/ZPsS0oZuc.png)
|
||||
|
||||
When you start out, the custom form kind of jumbles up the available fields. So I'm going to start by dragging-and-dropping the fields to resemble the order defined in the Cloud Template:
|
||||
![image.png](/assets/images/posts-2020/oLwUg1k6T.png)
|
||||
![image.png](/images/posts-2020/oLwUg1k6T.png)
|
||||
|
||||
In addition to rearranging the request form fields, Custom Forms also provide significant control over how the form behaves. You can change how a field is displayed, define default values, make fields dependent upon other fields and more. For instance, all of my templates and resources belong to a single project so making the user select the project (from a set of 1) is kind of redundant. Every deployment has to be tied to a project so I can't just remove that field, but I can select the "Project" field on the canvas and change its *Visibility* to "No" to hide it. It will silently pass along the correct project ID in the background without cluttering up the form.
|
||||
![Hiding the Project field](/assets/images/posts-2020/4flvfGC54.png)
|
||||
![Hiding the Project field](/images/posts-2020/4flvfGC54.png)
|
||||
|
||||
How about that Deployment Name field? In my tests, I'd been manually creating a string of numbers to uniquely identify the deployment, but I'm not going to ask my users to do that. Instead, I'll leverage another great capability of Custom Forms - tying a field value to a result of a custom vRO action!
|
||||
|
||||
|
@ -49,10 +49,10 @@ How about that Deployment Name field? In my tests, I'd been manually creating a
|
|||
*[Update] I've since come up with what I think is a better approach to handling this. Check it out [here](vra8-automatic-deployment-naming-another-take)!*
|
||||
|
||||
That means it's time to dive back into the vRealize Orchestrator interface and whip up a new action for this purpose. I created a new action within my existing `net.bowdre.utility` module called `createDeploymentName`.
|
||||
![createDeploymentName action](/assets/images/posts-2020/GMCWhns7u.png)
|
||||
![createDeploymentName action](/images/posts-2020/GMCWhns7u.png)
|
||||
|
||||
A good deployment name *must* be globally unique, and it would be great if it could also convey some useful information like who requested the deployment, which template it is being deployed from, and the purpose of the server. The `siteCode (String)`, `envCode (String)`, `functionCode (String)`, and `appCode (String)` variables from the request form will do a great job of describing the server's purpose. I can also pass in some additional information from the Service Broker form like `catalogItemName (String)` to get the template name and `requestedByName (String)` to identify the user making the request. So I'll set all those as inputs to my action:
|
||||
![createDeploymentName inputs](/assets/images/posts-2020/bCKrtn05o.png)
|
||||
![createDeploymentName inputs](/images/posts-2020/bCKrtn05o.png)
|
||||
|
||||
I also went ahead and specified that the action will return a String.
|
||||
|
||||
|
@ -76,10 +76,10 @@ return deploymentName
|
|||
```
|
||||
|
||||
With that sorted, I can go back to the Service Broker interface to modify the custom form a bit more. I select the "Deployment Name" field and click over to the Values tab on the right. There, I set the *Value source* to "External source" and *Select action* to the new action I just created, `net.bowdre.utility/createDeploymentName`. (If the action doesn't appear in the search field, go to *Infrastructure > Integrations > Embedded-VRO* and click the "Start Data Collection" button to force vRA to update its inventory of vRO actions and workflows.) I then map all the action's inputs to properties available on the request form.
|
||||
![Linking the action](/assets/images/posts-2020/mpbPukEeB.png)
|
||||
![Linking the action](/images/posts-2020/mpbPukEeB.png)
|
||||
|
||||
The last step before testing is to click that *Enable* button to activate the custom form, and then the *Save* button to save my work. So did it work? Let's head to the *Catalog* tab and open the request:
|
||||
![Screen recording 2021-05-10 17.01.37.gif](/assets/images/posts-2020/tybyj-5dG.gif)
|
||||
![Screen recording 2021-05-10 17.01.37.gif](/images/posts-2020/tybyj-5dG.gif)
|
||||
|
||||
Cool! So it's dynamically generating the deployment name based on selections made on the form. Now that it works, I can go back to the custom form and set the "Deployment Name" field to be invisible just like the "Project" one.
|
||||
|
||||
|
@ -116,15 +116,15 @@ I *could* just use those tags to let users pick the appropriate network, but I'v
|
|||
| d1650-Servers-4 | `net:dre`, `net:front`, `net:dre-front-172.16.50.0` |
|
||||
| d1660-Servers-5 | `net:dre`, `net:back`, `net:dre-back-172.16.60.0` |
|
||||
|
||||
![Tagged networks](/assets/images/posts-2020/J_RG9JNPz.png)
|
||||
![Tagged networks](/images/posts-2020/J_RG9JNPz.png)
|
||||
|
||||
So I can now use a single tag to positively identify a single network, as long as I know its site and either its purpose or its IP space. I'll reference these tags in a vRO action that will populate a dropdown in the request form with the available networks for the selected site. Unfortunately I couldn't come up with an easy way to dynamically pull the tags into vRO so I create another Configuration Element to store them:
|
||||
![networksPerSite configuration element](/assets/images/posts-2020/xfEultDM_.png)
|
||||
![networksPerSite configuration element](/images/posts-2020/xfEultDM_.png)
|
||||
|
||||
This gets filed under the existing `CustomProvisioning` folder, and I name it `networksPerSite`. Each site gets a new variable of type `Array/string`. The name of the variable matches the site ID, and the contents are just the tags minus the `net:` prefix.
I created a new action named (appropriately) `getNetworksForSite`. This will accept `siteCode (String)` as its input from the Service Broker request form, and will return an array of strings containing the available networks.
![getNetworksForSite action](/images/posts-2020/IdrT-Un8H1.png)
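A minimal sketch of that lookup, assuming the tags are read out of the `networksPerSite` Configuration Element with vRO's configuration element API (the exact calls here are assumptions):
```js
// Sketch of getNetworksForSite: return the Array/string attribute named for the
// requested site from the CustomProvisioning/networksPerSite Configuration Element.
var category = Server.getConfigurationElementCategoryWithPath("CustomProvisioning");
var elements = category.configurationElements;
var networks = null;
for (var i = 0; i < elements.length; i++) {
    if (elements[i].name == "networksPerSite") {
        // attribute key matches the site ID; its value is the Array/string of networks
        networks = elements[i].getAttributeWithKey(siteCode).value;
    }
}
return networks;
```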
Remember that the `networksPerSite` configuration element contains the portion of the tags *after* the `net:` prefix so that's why I include the prefix in the constraint tag here. I just didn't want it to appear in the selection dropdown.
After making this change to the Cloud Template I use the "Create Version" button again to create a new version and tick the option to release it so that it can be picked up by Service Broker.
![Another new version](/images/posts-2020/REZ08yA2E.png)
Back on the Service Broker UI, I hit my `LAB` Content Source again to Save & Import the new change, and then go to customize the form for `WindowsDemo` again. After dragging-and-dropping the new `Network` field onto the request form blueprint, I kind of repeat the steps I used for adjusting the Deployment Name field earlier. On the Appearance tab I set it to be a DropDown, and on the Values tab I set it to an external source, `net.bowdre.utility/getNetworksForSite`. This action only needs a single input so I map `Site` on the request form to the `siteCode` input.
![Linking the Network field to the getNetworksForSite action](/images/posts-2020/CDy518peA.png)
Now I can just go back to the Catalog tab and request a new deployment to check out my--
![Ew, an ugly error](/images/posts-2020/zWFTuOYOG.png)
Oh yeah. That vRO action gets called as soon as the request form loads - before selecting the required site code as an input. I could modify the action so that it returns an empty string if the site hasn't been selected yet, but I'm kind of lazy so I'll instead just modify the custom form so that the Site field defaults to the `BOW` site.
![BOW is default](/images/posts-2020/yb77nH2Fp.png)
*Now* I can open up the request form and see how well it works:
![Network selection in action](/images/posts-2020/fh37T__nb.gif)
Noice!
### Putting it all together now
At this point, I'll actually kick off a deployment and see how everything works out.
![The request](/images/posts-2020/hFPeakMxn.png)
After hitting Submit, I can see that this deployment has a much more friendly name than the previous ones:
![Auto generated deployment name!](/images/posts-2020/TQGyrUqIx.png)
And I can also confirm that the VM got named appropriately (based on the [naming standard I implemented earlier](vra8-custom-provisioning-part-two)), and it also got placed on the `172.16.60.0/24` network I selected.
![Network placement - check!](/images/posts-2020/1NJvDeA7r.png)
Very slick. And I think that's a great stopping point for today.
A [few days ago](vra8-custom-provisioning-part-four#automatic-deployment-naming), I shared how I combined a Service Broker Custom Form with a vRO action to automatically generate a unique and descriptive deployment name based on user inputs. That approach works *fine* but while testing some other components I realized that calling that action each time a user makes a selection isn't necessarily ideal. After a bit of experimentation, I settled on what I believe to be a better solution.
Instead of setting the "Deployment Name" field to use an External Source (vRO), I'm going to configure it to use a Computed Value. This is a bit less flexible, but all the magic happens right there in the form without having to make an expensive vRO call.
![Computed Value option](/images/posts-2020/Ivv0ia8oX.png)
After setting `Value source` to `Computed value`, I also set the `Operation` to `Concatenate` (since it is, after all, the only operation choice). I can then use the **Add Value** button to add some fields. Each can be either a *Constant* (like a separator) or linked to a *Field* on the request form. By combining those, I can basically reconstruct the same arrangement that I was previously generating with vRO:
![Fields and Constants!](/images/posts-2020/zN3EN6lrG.png)
So this will generate a name that looks something like `[user]_[catalog_item]_[site]-[env][function]-[app]`, all without having to call vRO! That gets me pretty close to what I want... but there's always the chance that the generated name won't be truly unique. Being able to append a timestamp on to the end would be a big help here.
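That's a job for one more tiny vRO action, `net.bowdre.utility/getTimestamp`, which takes no inputs and just returns a String. A rough sketch (the exact output format is an assumption; the point is sub-second uniqueness):
```js
// Sketch of a getTimestamp action: no inputs, returns a String with millisecond precision.
function pad(value, width) {
    var text = String(value);
    while (text.length < width) { text = "0" + text; }
    return text;
}
var now = new Date();
var result = pad(now.getFullYear(), 4) + pad(now.getMonth() + 1, 2) + pad(now.getDate(), 2)
    + "-" + pad(now.getHours(), 2) + pad(now.getMinutes(), 2) + pad(now.getSeconds(), 2)
    + pad(now.getMilliseconds(), 3);
return result;
```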
I then drag a Text Field called `Timestamp` onto the Service Broker Custom Form canvas, and set it to not be visible:
![Invisible timestamp](/images/posts-2020/rtTeG3ZoR.png)
And I set it to pull its value from my new `net.bowdre.utility/getTimestamp` action:
![Calling the action](/images/posts-2020/NoN-72Qf6.png)
Now when the form loads, this field will store a timestamp with thousandths-of-a-second precision.
The last step is to return to the Deployment Name field and link in the new Timestamp field so that it will get tacked on to the end of the generated name.
![Linked!](/images/posts-2020/wl-WPQpEl.png)
The old way looked like this, where it had to churn a bit after each selection:
![The Churn](/images/posts-2020/vH-npyz9s.gif)
Here's the newer approach, which feels much snappier:
![Snappy!](/images/posts-2020/aumfETl1l.gif)
Not bad! Now I can make the Deployment Name field hidden again and get back to work!
Then I went into vCenter, hit the **Deploy OVF Template** option, and pasted in the URL:
![Deploying the OVA straight from the internet](/images/posts-2020/Es90-kFW9.png)
This lets me skip the kind of tedious "download file from internet and then upload file to vCenter" dance, and I can then proceed to click through the rest of the deployment options.
![Ready to deploy](/images/posts-2020/rCpaTbPX5.png)
Once the VM is created, I power it on and hop into the web console. The default root password is `changeme`, and I'll of course be forced to change that the first time I log in.
I set the required permissions on my new network configuration file with `chmod 644 /etc/systemd/network/10-static-en.network` and then restarted `networkd` with `systemctl restart systemd-networkd`.
I then ran `networkctl` a couple of times until the `eth0` interface went fully green, and did an `ip a` to confirm that the address had been applied.
![Verifying networking](/images/posts-2020/qOw7Ysj3O.png)
One last little bit of housekeeping is to change the hostname with `hostnamectl set-hostname adguard` and then reboot for good measure. I can then log in via SSH to continue the setup.
![SSH login](/images/posts-2020/NOyfgjjUy.png)
Now that I'm in, I run `tdnf update` to make sure the VM is fully up to date.
### Post-deploy configuration
Next, I point a web browser to `http://adguard.lab.bowdre.net:3000` to perform the initial (minimal) setup:
![Initial config screen](/images/posts-2020/UHvtv1DrT.png)
Once that's done, I can log in to the dashboard at `http://adguard.lab.bowdre.net/login.html`:
![Login page](/images/posts-2020/34xD8tbli.png)
AdGuard Home ships with pretty sensible defaults, so there's not really a huge need to do a lot of configuration. Any changes that I *do* make will be saved in `~/adguard/confdir/AdGuardHome.yaml` so they will be preserved across container changes.
I already have Google Wifi set up to use my Windows DC (at `192.168.1.5`) for DNS. That lets me easily access systems on my internal `lab.bowdre.net` domain without having to manually configure DNS, and the DC forwards resolution requests it can't handle on to the upstream (internet) DNS servers.
To easily insert my AdGuard Home instance into the flow, I pop in to my Windows DC and configure the AdGuard Home address (`192.168.1.2`) as the primary DNS forwarder. The DC will continue to handle internal resolutions, and anything it can't handle will now get passed up the chain to AdGuard Home. And this also gives me a bit of a failsafe, in that queries will fail back to the previously-configured upstream DNS if AdGuard Home doesn't respond within a few seconds.
![Setting AdGuard Home as a forwarder](/images/posts-2020/bw09OXG7f.png)
It's working!
![Requests!](/images/posts-2020/HRRpFOKuN.png)
### Caveat
Chaining my DNS configurations in this way (router -> DC -> AdGuard Home -> internet) does have a bit of a limitation, in that all queries will appear to come from the Windows server:
![Only client](/images/posts-2020/OtPGufxlP.png)
I won't be able to do any per-client filtering as a result, but honestly I'm okay with that as I already use the "Pause Internet" option in Google Wifi to block outbound traffic from certain devices anyway. And using the Windows DNS as an intermediary makes it significantly quicker and easier to switch things up if I run into problems later; changing the forwarder here takes effect instantly rather than having to manually update all of my clients or wait for DHCP to distribute the change.
I have worked around this in the past by [bypassing Google Wifi's DHCP](https://www.mbreviews.com/pi-hole-google-wifi-raspberry-pi/) but I think it was actually more trouble than it was worth to me.
### One last thing...
I'm putting a lot of responsibility on both of these VMs, my Windows DC and my new AdGuard Home instance. If they aren't up, I won't have internet access, and that would be a shame. I already have my ESXi host configured to automatically start up when power is (re)applied, so I also adjust the VM Startup/Shutdown Configuration so that AdGuard Home will automatically boot after ESXi is loaded, followed closely by the Windows DC (and the rest of my virtualized infrastructure):
![Auto Start-up Options](/images/posts-2020/clE6OVmjp.png)
So there you have it. Simple DNS-based ad-blocking running on a minimal container-optimized VM that *should* be more stable than the add-on tacked on to my Home Assistant instance. Enjoy!
### New inputs
I'll start this by adding a few new inputs to the cloud template in Cloud Assembly.
![New inputs in Cloud Assembly](/images/posts-2020/F3Wkd3VT.png)
I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`.
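Something along these lines does that sort of loose check (the exact pattern in the template may differ; this is just to illustrate the idea):
```js
// Loose "probably an email" check: non-space text, an @, more text, a dot, more text.
var looksLikeEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
looksLikeEmail.test("user@example.com");  // true
looksLikeEmail.test("not an email");      // false
```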
I'll also need to add these to the `resources` section of the template so that they will get passed along with the deployment properties.
![New resource properties](/images/posts-2020/N7YllJkxS.png)
I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string.
I'll save this as a new version so that the changes will be available in the Service Broker front-end.
![New template version](/images/posts-2020/Z2aKLsLou.png)
### Service Broker custom form
I can then go to Service Broker and drag the new fields onto the Custom Form canvas. (If the new fields don't show up, hit up the Content Sources section of Service Broker, select the content source, and click the "Save and Import" button to sync the changes.) While I'm at it, I set the Description field to display as a text area (encouraging more detailed input), and I also set all the fields on the form to be required.
![Service Broker form](/images/posts-2020/unhgNySSzz.png)
### vRO workflow
Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning".
![image.png](/images/posts-2020/X9JhgWx8x.png)
The workflow will have a single input from vRA, `inputProperties` of type `Properties`.
![image.png](/images/posts-2020/zHrp6GPcP.png)
The first thing this workflow needs to do is parse `inputProperties (Properties)` to get the name of the VM, and it will then use that information to query vCenter and grab the corresponding VM object. So I'll add a scriptable task item to the workflow canvas and call it `Get VM Object`. It will take `inputProperties (Properties)` as its sole input, and output a new variable called `vm` of type `VC:VirtualMachine`.
![image.png](/images/posts-2020/5ATk99aPW.png)
The script for this task is fairly straightforward:
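In sketch form (the payload property name and the vCenter plug-in query are assumptions on my part), it boils down to:
```js
// Sketch: find the VC:VirtualMachine object matching the deployed VM's name.
// Assumes the VM name is surfaced via inputProperties.resourceNames and uses the
// vCenter plug-in's getAllVirtualMachines() xpath search.
var vmName = inputProperties.get("resourceNames")[0];
var vms = VcPlugin.getAllVirtualMachines(null, "xpath:name='" + vmName + "'");
System.log("Found VM object: " + vms[0].name);
vm = vms[0];  // bound as the task's output variable
```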
I'll add another scriptable task item to the workflow to actually apply the notes to the VM - I'll call it `Set Notes`, and it will take both `vm (VC:VirtualMachine)` and `inputProperties (Properties)` as its inputs.
![image.png](/images/posts-2020/w24V6YVOR.png)
The first part of the script creates a new VM config spec, inserts the description into the spec, and then reconfigures the selected VM with the new spec.
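Roughly speaking (the payload keys and the custom attribute name are assumptions, and the second half leans on the vCenter plug-in's bundled `customattribute` library module):
```js
// Sketch of the Set Notes task.
var customProps = inputProperties.get("customProperties");

// Part 1: push the requester's description into the VM's Notes field.
var spec = new VcVirtualMachineConfigSpec();
spec.annotation = customProps.get("description");
vm.reconfigVM_Task(spec);

// Part 2: stash the point-of-contact details in a vCenter custom attribute
// (the attribute name here is a placeholder).
System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm, "Point of Contact", customProps.get("poc"));
```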
### Extensibility subscription
Now I need to return to Cloud Assembly and create a new extensibility subscription that will call this new workflow at the appropriate time. I'll call it "VM Post-Provisioning" and attach it to the "Compute Post Provision" topic.
![image.png](/images/posts-2020/PmhVOWJsUn.png)
And then I'll link it to my new workflow:
![image.png](/images/posts-2020/cEbWSOg00.png)
### Testing
And then back to Service Broker to request a VM and see if it works:
![image.png](/images/posts-2020/Lq9DBCK_Y.png)
It worked!
![image.png](/images/posts-2020/-Fuvz-GmF.png)
In the future, I'll be exploring more features that I can add on to this "VM Post-Provisioning" workflow like creating static DNS records as needed.
### Instance creation
Getting a VM spun up on Oracle Cloud was a pretty simple process. I logged into my account, navigated to *Menu -> Compute -> Instances*, and clicked on the big blue **Create Instance** button.
![Create Instance](/images/posts-2020/8XAB60aqk.png)
I'll be hosting this for my `bowdre.net` domain, so I start by naming the instance accordingly: `matrix.bowdre.net`. Naming it isn't strictly necessary, but it does help with keeping track of things. The instance defaults to using an Oracle Linux image. I'd rather use an Ubuntu one for this, simply because I was able to find more documentation on getting Synapse going on Debian-based systems. So I hit the **Edit** button next to *Image and Shape*, select the **Change Image** option, pick **Canonical Ubuntu** from the list of available images, and finally click **Select Image** to confirm my choice.
![Image Selection](/images/posts-2020/OSbsiOw8E.png)
This will be an Ubuntu 20.04 image running on a `VM.Standard.E2.1.Micro` instance, which gets a single AMD EPYC 7551 CPU with 2.0GHz base frequency and 1GB of RAM. It's not much, but it's free - and it should do just fine for this project.
I can leave the rest of the options as their defaults, making sure that the instance will be allotted a public IPv4 address.
![Other default selections](/images/posts-2020/Ki0z1C3g.png)
Scrolling down a bit to the *Add SSH Keys* section, I leave the default **Generate a key pair for me** option selected, and click the very-important **Save Private Key** button to download the private key to my computer so that I'll be able to connect to the instance via SSH.
![Download Private Key](/images/posts-2020/dZkZUIFum.png)
Now I can finally click the blue **Create Instance** button at the bottom of the screen, and just wait a few minutes for it to start up. Once the status shows a big green "Running" square, I'm ready to connect! I'll copy the listed public IP and make a note of the default username (`ubuntu`). I can then plug the IP, username, and the private key I downloaded earlier into my SSH client (the [Secure Shell extension](https://chrome.google.com/webstore/detail/secure-shell/iodihamcpbpeioajjeobimgagajmlibd) for Google Chrome since I'm doing this from my Pixelbook), and log in to my new VM in The Cloud.
![Logged in!](/images/posts-2020/5PD1H7b1O.png)
### DNS setup
According to [Oracle's docs](https://docs.oracle.com/en-us/iaas/Content/Network/Tasks/managingpublicIPs.htm), the public IP assigned to my instance is mine until I terminate the instance. It should even remain assigned if I stop or restart the instance, just as long as I don't delete the virtual NIC attached to it. So I'll skip the [`ddclient`-based dynamic DNS configuration I've used in the past](bitwarden-password-manager-self-hosted-on-free-google-cloud-instance#configure-dynamic-dns) and instead go straight to my registrar's DNS management portal and create a new `A` record for `matrix.bowdre.net` with the instance's public IP.
Synapse listens on port `8008` for connections from messaging clients, and typically uses port `8448` for federation traffic from other Matrix servers. Rather than expose those ports directly, I'm going to put Synapse behind a reverse proxy on HTTPS port `443`. I'll also need to allow inbound traffic on HTTP port `80` for ACME certificate challenges. I've got two firewalls to contend with: the Oracle Cloud one, which blocks traffic from getting into my virtual cloud network, and the host firewall running inside the VM.
I'll tackle the cloud firewall first. From the page showing my instance details, I click on the subnet listed under the *Primary VNIC* heading:
![Click on subnet](/images/posts-2020/lBjINolYq.png)
I then look in the *Security Lists* section and click on the Default Security List:
![Click on default security list](/images/posts-2020/nnQ7aQrpm.png)
The *Ingress Rules* section lists the existing inbound firewall exceptions, which by default is basically just SSH. I click on **Add Ingress Rules** to create a new one.
![Ingress rules](/images/posts-2020/dMPHvLHkH.png)
I want this to apply to traffic from any source IP so I enter the CIDR `0.0.0.0/0`, and I enter the *Destination Port Range* as `80,443`. I also add a brief description and click **Add Ingress Rules**.
![Adding an ingress rule](/images/posts-2020/2fbKJc5Y6.png)
Success! My new ingress rules appear at the bottom of the list.
![New rules added](/images/posts-2020/s5Y0rycng.png)
That gets traffic from the internet to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly). I'll start by listing the existing rules on the `INPUT` chain:
Browsing to `https://matrix.bowdre.net` shows a blank page - but a valid and trusted certificate that I did absolutely nothing to configure!
![Valid cert!](/images/posts-2020/GHVqVOTAE.png)
The `.well-known` URL also returns the expected JSON:
![.well-known](/images/posts-2020/6IRPHhr6u.png)
And trying to hit anything else at `https://bowdre.net` brings me right back here.
### Testing
And I can point my browser to `https://matrix.bowdre.net/_matrix/static/` and see the Matrix landing page:
![Synapse is running!](/images/posts-2020/-9apQIUci.png)
Before I start trying to connect with a client, I'm going to plug the server address in to the [Matrix Federation Tester](https://federationtester.matrix.org/) to make sure that other servers will be able to talk to it without any problems:
![Good to go](/images/posts-2020/xqOt3SydX.png)
And I can view the JSON report at the bottom of the page to confirm that it's correctly pulling my `.well-known` delegation:
Now I can fire up my [Matrix client of choice](https://element.io/get-started), specify my homeserver using its full FQDN, and [register](https://app.element.io/#/register) a new user account:
![image.png](/images/posts-2020/2xe34VJym.png)
(Once my account gets created, I go back to edit `/opt/matrix/synapse/data/homeserver.yaml` again and set `enable_registration: false`, then fire a `docker-compose restart` command to restart the Synapse container.)
And the image embeds in the local copy of my posts now all look like this:
```markdown
![Clever image title](/images/posts-2020/lhTnVwCO3.png)
```
After a bit less than a year of hosting my little technical blog with [Hashnode](https://hashnode.com), I spent a few days [migrating the content](script-to-update-image-embed-links-in-markdown-files) over to a new format hosted with [GitHub Pages](https://pages.github.com/).
![Party!](/images/posts-2021/07/20210720-party.gif)
### So long, Hashnode
Hashnode served me well for the most part, but it was never really a great fit for me. Hashnode's focus is on developer content, and I'm not really a developer; I'm a sysadmin who occasionally develops solutions to solve my needs, but the code is never the end goal for me. As a result, I didn't spend much time in the (large and extremely active) community associated with Hashnode. It's a perfectly adequate blogging platform apart from the community, but it's really built to prop up that community aspect and I found that to be a bit limiting - particularly once Hashnode stopped letting you create tags to be used within your blog and instead only allowed you to choose from [the tags](https://hashnode.com/tags) already popular in the community. There are hundreds of tags for different coding languages, but not any that would cover the infrastructure virtualization or other technical projects that I tend to write about.
And there it is!
![Jekyll running locally on my Chromebook](/images/posts-2021/07/20210720-jekyll.png)
### `git push` time
Alright that's enough rambling for now. I'm very happy with this new setup, particularly with the automatically-generated Table of Contents to help folks navigate some of my longer posts. (I can't believe I was having to piece those together manually in this blog's previous iteration!)
### Adding the AD integration
First things first: connecting vRA to AD. I do this by opening the Cloud Assembly interface, navigating to **Infrastructure > Connections > Integrations**, and clicking the **Add Integration** button. I'm then prompted to choose the integration type so I select the **Active Directory** one, and then I fill in the required information: a name (`Lab AD` seems appropriate), my domain controller as the LDAP host (`ldap://win01.lab.bowdre.net:389`), credentials for an account with sufficient privileges to create and delete computer objects (`lab\vra`), and finally the base DN to be used for the LDAP connection (`DC=lab,DC=bowdre,DC=net`).
![Creating the new AD integration](/images/posts-2021/07/20210721-adding-ad-integration.png)
Clicking the **Validate** button quickly confirms that I've entered the information correctly, and then I can click **Add** to save my work.
I'll then need to associate the integration with a project by opening the new integration, navigating to the **Projects** tab, and clicking **Add Project**. Now I select the project name from the dropdown, enter a valid relative OU (`OU=LAB`), and enable the options to let me override the relative OU and optionally skip AD actions from the cloud template.
![Project options for the AD integration](/images/posts-2021/07/20210721-adding-project-to-integration.png)
### Customization specs
As mentioned above, I'll leverage the customization specs in vCenter to handle the actual joining of a computer to the domain. I maintain two specs for Windows deployments (one to join the domain and one to stay on the workgroup), and I can let the vRA cloud template decide which should be applied to a given deployment.
First, the workgroup spec, appropriately called `vra-win-workgroup`:
![Workgroup spec](/images/posts-2020/AzAna5Dda.png)
It's about as basic as can be, including using DHCP for the network configuration (which doesn't really matter since the VM will eventually get a [static IP assigned from {php}IPAM](integrating-phpipam-with-vrealize-automation-8)).
`vra-win-domain` is basically the same, with one difference:
![Domain spec](/images/posts-2020/0ZYcORuiU.png)
Now to reference these specs from a cloud template...
The last thing I need to do before leaving the Cloud Assembly interface is smash that **Version** button at the bottom of the cloud template editor so that the changes will be visible to Service Broker:
![New version](/images/posts-2020/gOTzVawJE.png)
### Service Broker custom form updates
... and the *first* thing to do after entering the Service Broker UI is to navigate to **Content Sources**, click on my Lab content source, and then click **Save & Import** to bring in the new version. I can then go to **Content**, click the little three-dot menu icon next to my `WindowsDemo` cloud template, and pick the **Customize form** option.
This bit will be pretty quick. I just need to look for the new `Join to AD domain` element on the left:
![New element on left](/images/posts-2020/Zz0D9wjYr.png)
And drag-and-drop it onto the canvas in the middle. I'll stick it directly after the `Network` field:
![New element on the canvas](/images/posts-2020/HHiShFlnT.png)
I don't need to do anything else here since I'm not trying to do any fancy logic or anything, so I'll just hit **Save** and move on to...
### Testing
Now to submit the request through Service Broker to see if this actually works:
![Submitting the request](/images/posts-2021/07/20210721-test-deploy-request.png)
After a few minutes, I can go into Cloud Assembly and navigate to **Extensibility > Activity > Actions Runs** and look at the **Integration Runs** to see if the `ad_machine` action has completed yet.
![Successful ad_machine action](/images/posts-2021/07/20210721-successful-ad_machine.png)
Looking good! And once the deployment completes, I can look at the VM in vCenter to see that it has registered a fully-qualified DNS name since it was automatically joined to the domain:
![Domain-joined VM](/images/posts-2021/07/20210721-vm-joined.png)
I can also repeat the test for a VM deployed to the `DRE` site just to confirm that it gets correctly placed in that site's OU:
![Another domain-joined VM](/images/posts-2021/07/20210721-vm-joined-2.png)
And I'll fire off another deployment with the `adJoin` box *unchecked* to test that I can also skip the AD configuration completely:
![VM not joined to the domain](/images/posts-2021/07/20210721-vm-not-joined.png)
### Conclusion
Confession time: I had actually started writing this post weeks ago. At that point, my efforts to bend the built-in AD integration to my will had been fairly unsuccessful, so I was instead working on a custom vRO workflow to accomplish the same basic thing. I circled back to try the AD integration again after upgrading the vRA environment to the latest 8.4.2 release, and found that it actually works quite well now. So I happily scrapped my ~50 lines of messy vRO JavaScript in favor of *just three lines* of YAML in the cloud template.
### Reviewing the theme-provided option
The Jekyll theme I'm using ([Minimal Mistakes](https://github.com/mmistakes/minimal-mistakes)) comes with [built-in support](https://mmistakes.github.io/mm-github-pages-starter/categories/) for a [category archive page](series), which (like the [tags page](tags)) displays all the categorized posts on a single page. Links at the top will let you jump to an appropriate anchor to start viewing the selected category, but it's not really an elegant way to display a single category.
![Posts by category](/images/posts-2021/07/20210724-posts-by-category.png)
It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at [`_pages/category-archive.md`](https://raw.githubusercontent.com/mmistakes/mm-github-pages-starter/master/_pages/category-archive.md):
You can see that this page is referencing the series layout I just created, and it's going to live at `http://localhost/series/vra8` - precisely where this series was on Hashnode. I've tagged it with the category I want to feature on this page, and specified that the posts will be sorted in reverse order so that anyone reading through the series will start at the beginning (I hear it's a very good place to start). I also added a teaser image which will be displayed when I link to the series from elsewhere. And I included a quick little italicized blurb to tell readers what the series is about.
Check it out [here](series/vra8):
![vRA8 series](/images/posts-2021/07/20210724-vra8-series.png)
The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`:
### Fixing category links in posts
The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (`/series/#vra8`) instead of the series feature pages I created (`/series/vra8`).
![Old category link](/images/posts-2021/07/20210724-old-category-link.png)
That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey.
To sell the series illusion even further, I can pop into `_data/ui-text.yml`:
```
date_label : "Updated:"
comments_label : "Leave a comment"
```
![Updated series link](/images/posts-2021/07/20210724-new-series-link.png)
Much better!
### All done!
![Slick series navigation!](/images/posts-2021/07/20210724-series-navigation.png)
I set out to recreate the series setup that I had over at Hashnode, and I think I've accomplished that. More importantly, I've learned quite a bit more about how Jekyll works, and I'm already plotting further tweaks. For now, though, I think this is ready for a `git push`!
I save the template, and then also hit the "Version" button to publish a new version to the catalog:
![Releasing new version](/images/posts-2021/08/20210803_new_template_version.png)
#### Service Broker Custom Form
I switch over to the Service Broker UI to update the custom form - but first I stop off at **Content & Policies > Content Sources**, select my Content Source, and hit the **Save & Import** button to force a sync of the cloud templates. I can then move on to the **Content & Policies > Content** section, click the 3-dot menu next to my template name, and select the option to **Customize Form**.
I'll just drag the new Schema Element called `Create static DNS record` from the Request Inputs panel onto the form canvas. I'll drop it right below the `Join to AD domain` field:
![Adding the field to the form](/images/posts-2021/08/20210803_updating_custom_form.png)
And then I'll hit the **Save** button so that my efforts are preserved.
#### Configuration Element
But first, I'm going to go to the **Assets > Configurations** section of the Orchestrator UI and create a new Configuration Element to store variables related to the SSH host and DNS configuration.
![Create a new configuration](/images/posts-2020/Go3D-gemP.png)
I'll call it `dnsConfig` and put it in my `CustomProvisioning` folder.
![Giving it a name](/images/posts-2020/fJswso9KH.png)
And then I create the following variables:
`sshHost` is my new `win02` server that I'm going to connect to via SSH, and `sshUser` and `sshPass` should explain themselves. The `dnsServer` array will tell the script which DNS servers to try to create the record on; this will just be a single server in my lab, but I'm going to construct the script to support multiple servers in case one isn't reachable. And `supportedDomains` will be used to restrict where I'll be creating records; again, that's just a single domain in my lab, but I'm building this solution to account for the possibility that a VM might need to be deployed on a domain where I can't create a static record in this way, so I want it to fail elegantly.
Here's what the new configuration element looks like:
![Variables defined](/images/posts-2020/a5gtUrQbc.png)
#### Workflow to create records
I'll need to tell my workflow about the variables held in the `dnsConfig` Configuration Element I just created. I do that by opening the "VM Post-Provisioning" workflow in the vRO UI, clicking the **Edit** button, and then switching to the **Variables** tab. I create a variable for each member of `dnsConfig`, and enable the toggle to *Bind to configuration* so that I can select the corresponding item. It's important to make sure that the variable type exactly matches what's in the configuration element so that you'll be able to pick it!
![Linking variable to config element](/images/posts-2021/08/20210809_creating_bound_variable.png)
I repeat that for each of the remaining variables until all the members of `dnsConfig` are represented in the workflow:
![Variables added](/images/posts-2021/08/20210809_variables_added.png)
Now we're ready for the good part: inserting a new scriptable task into the workflow schema. I'll call it `Create DNS Record` and place it directly after the `Set Notes` task. For inputs, the task will take in `inputProperties (Properties)` as well as everything from that `dnsConfig` configuration element:
![Task inputs](/images/posts-2021/08/20210809_task_inputs.png)
And here's the JavaScript for the task:
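In condensed, sketch form (the payload keys are assumptions, and the actual SSH call is abbreviated to a comment):
```js
// Sketch of the Create DNS Record task: build an Add-DnsServerResourceRecordA command
// and run it against each server in the dnsServer array over SSH (sshHost/sshUser/sshPass).
var customProps = inputProperties.get("customProperties");
var hostname = inputProperties.get("resourceNames")[0];
var ipAddress = inputProperties.get("addresses")[0][0];
var staticDns = customProps.get("staticDns");
var dnsDomain = customProps.get("dnsDomain");

if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
    var psCommand = "Add-DnsServerResourceRecordA -ZoneName " + dnsDomain
        + " -Name " + hostname + " -IPv4Address " + ipAddress;
    for (var i = 0; i < dnsServer.length; i++) {
        System.log("Creating A record for " + hostname + "." + dnsDomain + " on " + dnsServer[i]);
        // run psCommand (targeting dnsServer[i] via -ComputerName) over SSH to sshHost here
    }
} else {
    System.log("Skipping static DNS registration for " + hostname);
}
```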
#### Workflow to delete records
I haven't previously created any workflows that fire on deployment removal, so I'll create a new one and call it `VM Deprovisioning`:
![New workflow](/images/posts-2021/08/20210811_new_workflow.png)
This workflow only needs a single input (`inputProperties (Properties)`) so it can receive information about the deployment from vRA:
![Workflow input](/images/posts-2021/08/20210811_inputproperties.png)
I'll also need to bind in the variables from the `dnsConfig` element as before:
![Workflow variables](/images/posts-2021/08/20210812_deprovision_variables.png)
The schema will include a single scriptable task:
![Delete DNS Record task](/images/posts-2021/08/20210812_delete_dns_record_task.png)
And it's going to be *pretty damn similar* to the other one:
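Sketched out, it's the same gate and the same loop, just tearing the record down instead (again, the payload keys are assumptions):
```js
// Sketch of the Delete DNS Record task: mirror of the create task, but running
// Remove-DnsServerResourceRecord for the deployment's hostname.
var customProps = inputProperties.get("customProperties");
var hostname = inputProperties.get("resourceNames")[0];
var staticDns = customProps.get("staticDns");
var dnsDomain = customProps.get("dnsDomain");

if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) {
    var psCommand = "Remove-DnsServerResourceRecord -RRType A -Force -ZoneName " + dnsDomain
        + " -Name " + hostname;
    for (var i = 0; i < dnsServer.length; i++) {
        System.log("Removing A record for " + hostname + "." + dnsDomain + " from " + dnsServer[i]);
        // run psCommand against dnsServer[i] over SSH, exactly as in the create task
    }
}
```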
Since this is a new workflow, I'll also need to head back to **Cloud Assembly > Extensibility > Subscriptions** and add a new subscription to call it when a deployment gets deleted. I'll call it "VM Deprovisioning", assign it to the "Compute Post Removal" Event Topic, and link it to my new "VM Deprovisioning" workflow. I *could* use the Condition option to filter this only for deployments which had a static DNS record created, but I'll later want to use this same workflow for other cleanup tasks so I'll just save it as is for now.
![VM Deprovisioning subscription](/images/posts-2021/08/20210812_deprovisioning_subscription.png)
### Testing
Now I can (finally) fire off a quick deployment to see if all this mess actually works:
![Test deploy request](/images/posts-2021/08/20210812_test_deploy_request.png)
Once the deployment completes, I go back into vRO, find the most recent item in the **Workflow Runs** view, and click over to the **Logs** tab to see how I did:
![Workflow success!](/images/posts-2021/08/20210813_workflow_success.png)
And I can run a quick query to make sure that name actually resolves:
It works!
Now to test the cleanup. For that, I'll head back to Service Broker, navigate to the **Deployments** tab, find my deployment, click the little three-dot menu button, and select the **Delete** option:
![Deleting the deployment](/images/posts-2021/08/20210813_delete_deployment.png)
Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning task completed successfully:
![VM Deprovisioning workflow](/images/posts-2021/08/20210813_workflow_deletion.png)
And I can `dig` a little more to make sure the name doesn't resolve anymore:
#### Shortcut mapping
Since the setup uses a simple Google Sheets document to map the shortcuts to the original long-form URLs, I started by going to [https://sheets.new](https://sheets.new) to create a new Sheet. I then just copied in the shortcuts and URLs I was already using in short.io. By the way, I learned on a previous attempt that this solution only works with lowercase shortcuts so I made sure to convert my `MixedCase` ones as I went.
![Creating a new sheet](/images/posts-2021/08/20210820_sheet.png)
I then made a note of the Sheet ID from the URL; that's the bit that looks like `1SMeoyesCaGHRlYdGj9VyqD-qhXtab1jrcgHZ0irvNDs`. That will be needed later on.
#### Create a new GCP project
I created a new project in my GCP account by going to [https://console.cloud.google.com/projectcreate](https://console.cloud.google.com/projectcreate) and entering a descriptive name.
![Creating a new GCP project](/images/posts-2021/08/20210820_create_project.png)
#### Deploy to GCP
At this point, I was ready to actually kick off the deployment. Ahmet made this part exceptionally easy: just hit the **Run on Google Cloud** button from the [Github project page](https://github.com/ahmetb/sheets-url-shortener#setup). That opens up a Google Cloud Shell instance which prompts for authorization before it starts the deployment script.
![Open in Cloud Shell prompt](/images/posts-2021/08/20210820_open_in_cloud_shell.png)

![Authorize Cloud Shell prompt](/images/posts-2021/08/20210820_authorize_cloud_shell.png)
The script prompted me to select a project and a region, and then asked for the Sheet ID that I copied earlier.
![Cloud Shell deployment](/images/posts-2021/08/20210820_cloud_shell.png)
#### Grant access to the Sheet
In order for the Cloud Run service to be able to see the URL mappings in the Sheet, I needed to share the Sheet with the service account. That service account is found by going to [https://console.cloud.google.com/run](https://console.cloud.google.com/run), clicking on the new `sheets-url-shortener` service, and then viewing the **Permissions** tab. I'm interested in the one that's `############-compute@developer.gserviceaccount.com`.
![Finding the service account](/images/posts-2021/08/20210820_service_account.png)
||||
![Sharing to the service account](/assets/images/posts-2021/08/20210820_share_with_svc_account.png)
|
||||
![Sharing to the service account](/images/posts-2021/08/20210820_share_with_svc_account.png)
|
||||
|
||||
#### Quick test
|
||||
Back in GCP land, the details page for the `sheets-url-shortener` Cloud Run service shows a gross-looking URL near the top: `https://sheets-url-shortener-vrw7x6wdzq-uc.a.run.app`. That doesn't do much for *shortening* my links, but it'll do just fine for a quick test. First, I pointed my browser straight to that listed URL:
|
||||
![Testing the web server](/assets/images/posts-2021/08/20210820_home_page.png)
|
||||
![Testing the web server](/images/posts-2021/08/20210820_home_page.png)
|
||||
|
||||
This at least tells me that the web server portion is working. Now to see if I can redirect to my [project car posts on Polywork](https://john.bowdre.net/?badges%5B%5D=Car+Nerd):
|
||||
![Testing a redirect](/assets/images/posts-2021/08/20210820_sheets_api_disabled.png)
|
||||
![Testing a redirect](/images/posts-2021/08/20210820_sheets_api_disabled.png)
|
||||
|
||||
Hmm, not quite. Luckily the error tells me exactly what I need to do...
|
||||
||||
I just needed to visit `https://console.developers.google.com/apis/api/sheets.googleapis.com/overview?project=############` to enable the Google Sheets API.
|
||||
![Enabling Sheets API](/assets/images/posts-2021/08/20210820_enable_sheets_api.png)
|
||||
![Enabling Sheets API](/images/posts-2021/08/20210820_enable_sheets_api.png)
|
||||
|
||||
Once that's done, I can try my redirect again - and, after a brief moment, it successfully sends me on to Polywork!
|
||||
![Successful redirect](/assets/images/posts-2021/08/20210820_successful_redirect.png)
|
||||
![Successful redirect](/images/posts-2021/08/20210820_successful_redirect.png)
|
||||
|
||||
#### Link custom domain
|
||||
The whole point of this project is to *shorten* URLs, but I haven't done that yet. I'll want to link in my `go.bowdre.net` domain to use that in place of the rather unwieldy `https://sheets-url-shortener-vrw7x6wdzq-uc.a.run.app`. I do that by going back to the [Cloud Run console](https://console.cloud.google.com/run) and selecting the option at the top to **Manage Custom Domains**.
|
||||
![Manage custom domains](/assets/images/posts-2021/08/20210820_manage_custom_domain.png)
|
||||
![Manage custom domains](/images/posts-2021/08/20210820_manage_custom_domain.png)
|
||||
|
||||
I can then use the **Add Mapping** button, select my `sheets-url-shortener` service, choose one of my verified domains (which I *think* are already verified since they're registered through Google Domains with the same account), and then specify the desired subdomain.
|
||||
![Adding a domain mapping](/assets/images/posts-2021/08/20210820_add_mapping_1.png)
|
||||
![Adding a domain mapping](/images/posts-2021/08/20210820_add_mapping_1.png)
|
||||
|
||||
The wizard then tells me exactly what record I need to create/update with my domain host:
|
||||
![CNAME details](/assets/images/posts-2021/08/20210820_add_mapping_2.png)
|
||||
![CNAME details](/images/posts-2021/08/20210820_add_mapping_2.png)
|
||||
|
||||
It took a while for the domain mapping to go live once I've updated the record.
|
||||
![Processing mapping...](/assets/images/posts-2021/08/20210820_domain_mapping.png)
|
||||
![Processing mapping...](/images/posts-2021/08/20210820_domain_mapping.png)
|
||||
|
||||
#### Final tests
|
||||
Once it did finally update, I was able to hit `https://go.bowdre.net` to get the error/landing page, complete with a valid SSL cert:
|
||||
![Successful error!](/assets/images/posts-2021/08/20210820_landing_page.png)
|
||||
![Successful error!](/images/posts-2021/08/20210820_landing_page.png)
|
||||
|
||||
And testing [go.bowdre.net/ghia](https://go.bowdre.net/ghia) works as well!
|
||||
With the template sorted, I need to assign it a new version and release it to the catalog so that the changes will be visible to Service Broker:
![Releasing a new version of a Cloud Assembly template](/images/posts-2021/08/20210831_cloud_assembly_new_version.png)
#### Service Broker custom form
I now need to also make some updates to the custom form configuration in Service Broker so that the new fields will appear on the request form. First things first, though: after switching to the Service Broker UI, I go to **Content & Policies > Content Sources**, open the linked content source, and click the **Save & Import** button to force Service Broker to pull in the latest versions from Cloud Assembly.
I can then go to **Content**, click the three-dot menu next to my `WindowsDemo` item, and select the **Customize Form** option. I drag-and-drop the `System drive size` from the *Schema Elements* section onto the canvas, placing it directly below the existing `Resource Size` field.
![Placing the system drive size field on the canvas](/images/posts-2021/08/20210831_system_drive_size_placement.png)
With the field selected, I use the **Properties** section to edit the label with a unit so that users will better understand what they're requesting.
![System drive size label](/images/posts-2021/08/20210831_system_drive_size_label.png)
On the **Values** tab, I change the *Step* option to `5` so that we won't wind up with users requesting a disk size of `62.357 GB` or anything crazy like that.

-![System drive size step](/assets/images/posts-2021/08/20210831_system_drive_size_step.png)
+![System drive size step](/images/posts-2021/08/20210831_system_drive_size_step.png)

I'll drag-and-drop the `Administrators` field to the canvas, and put it right below the VM description:

-![Administrators field placement](/assets/images/posts-2021/08/20210831_administrators_placement.png)
+![Administrators field placement](/images/posts-2021/08/20210831_administrators_placement.png)

I only want this field to be visible if the VM is going to be joined to the AD domain, so I'll set the *Visibility* accordingly:

-![Administrators field visibility](/assets/images/posts-2021/08/20210831_administrators_visibility.png)
+![Administrators field visibility](/images/posts-2021/08/20210831_administrators_visibility.png)

That should be everything I need to add to the custom form, so I'll be sure to hit that big **Save** button before moving on.

@ -279,21 +279,21 @@ From the vRA Cloud Assembly interface, I'll navigate to **Extensibility > Librar
- `templatePassWinDomain`: for logging into VMs with the designated domain credentials.
I'll make sure to enable the *Encrypt the action constant value* toggle for each so they'll be protected.

-![Creating an action constant](/assets/images/posts-2021/09/20210901_create_action_constant.png)
+![Creating an action constant](/images/posts-2021/09/20210901_create_action_constant.png)

-![Created action constants](/assets/images/posts-2021/09/20210901_action_constants.png)
+![Created action constants](/images/posts-2021/09/20210901_action_constants.png)

Once all those constants are created, I can move on to the meat of this little project:

#### ABX Action

I'll click back to **Extensibility > Library > Actions** and then **+ New Action**. I give the new action a clever title and description:

-![Create a new action](/assets/images/posts-2021/09/20210901_create_action.png)
+![Create a new action](/images/posts-2021/09/20210901_create_action.png)

I then hit the language dropdown near the top left and select `powershell` so that I can use those sweet, sweet PowerCLI cmdlets.

-![Language selection](/assets/images/posts-2021/09/20210901_action_select_language.png)
+![Language selection](/images/posts-2021/09/20210901_action_select_language.png)

And I'll pop over to the right side to map the Action Constants I created earlier so that I can reference them in the script I'm about to write:

-![Mapping constants in action](/assets/images/posts-2021/09/20210901_map_constants_to_action.png)
+![Mapping constants in action](/images/posts-2021/09/20210901_map_constants_to_action.png)

Now for The Script:

```powershell
@ -468,27 +468,27 @@ It wouldn't be hard to customize the script to perform different actions (or eve

#### Event subscription

Before I can test the new action, I'll need to first add an extensibility subscription so that the ABX action will get called during the deployment. So I head to **Extensibility > Subscriptions** and click the **New Subscription** button.

-![Extensibility subscriptions](/assets/images/posts-2021/09/20210903_extensibility_subscriptions.png)
+![Extensibility subscriptions](/images/posts-2021/09/20210903_extensibility_subscriptions.png)

I'll be using this to call my new `configureGuest` action - so I'll name the subscription `Configure Guest`. I tie it to the `Compute Post Provision` event, and bind my action:

-![Creating the new subscription](/assets/images/posts-2021/09/20210903_new_subscription_1.png)
+![Creating the new subscription](/images/posts-2021/09/20210903_new_subscription_1.png)

I do have another subscription on that event already, [`VM Post-Provisioning`](adding-vm-notes-and-custom-attributes-with-vra8#extensibility-subscription), which is used to modify the VM object with notes and custom attributes. I'd like to make sure that my work inside the guest happens after that other subscription is completed, so I'll enable blocking and give it a priority of `2`:

-![Adding blocking to Configure Guest](/assets/images/posts-2021/09/20210903_new_subscription_2.png)
+![Adding blocking to Configure Guest](/images/posts-2021/09/20210903_new_subscription_2.png)

After hitting the **Save** button, I go back to that other `VM Post-Provisioning` subscription, set it to enable blocking, and give it a priority of `1`:

-![Blocking VM Post-Provisioning](/assets/images/posts-2021/09/20210903_old_subscription_blocking.png)
+![Blocking VM Post-Provisioning](/images/posts-2021/09/20210903_old_subscription_blocking.png)

This will ensure that the new subscription fires after the older one completes, and that should avoid any conflicts between the two.

### Testing

Alright, now let's see if it worked. I head into Service Broker to submit the deployment request:

-![Submitting the test deployment](/assets/images/posts-2021/09/20210903_request.png)
+![Submitting the test deployment](/images/posts-2021/09/20210903_request.png)

Note that I've set the disk size to 65GB (up from the default of 60), and I'm adding `lab\testy` as a local admin on the deployed system.

Once the deployment finishes, I can switch back to Cloud Assembly and check **Extensibility > Activity > Action Runs** and then click on the `configureGuest` run to see how it did.

-![Successful action run](/assets/images/posts-2021/09/20210903_action_run_success.png)
+![Successful action run](/images/posts-2021/09/20210903_action_run_success.png)

It worked!

@ -546,10 +546,10 @@ TaskPath TaskName
```

So it *claims* to have successfully updated the VM tools, added `lab\testy` to the local `Administrators` group, extended the `C:` volume to fill the 65GB virtual disk, added firewall rules to permit remote access, and created a scheduled task to apply updates. I can open a console session to the VM to spot-check the results.
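
Nothing fancy is needed for that spot-check; a couple of stock cmdlets from the console session will do. This is a sketch of what I'd run, not output captured from the lab:

```powershell
# confirm the domain account landed in the local Administrators group
Get-LocalGroupMember -Group 'Administrators' | Select-Object Name, PrincipalSource

# confirm the C: volume now spans the full 65GB virtual disk
Get-Volume -DriveLetter C | Select-Object DriveLetter, Size, SizeRemaining
```
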

-![Verifying local admins](/assets/images/posts-2021/09/20210903_verify_local_admins.png)
+![Verifying local admins](/images/posts-2021/09/20210903_verify_local_admins.png)

Yep, `testy` is an admin now!

-![Verify disk size](/assets/images/posts-2021/09/20210903_verify_disk_size.png)
+![Verify disk size](/images/posts-2021/09/20210903_verify_disk_size.png)

And `C:` fills the disk!

### Wrap-up

@ -46,10 +46,10 @@ I started by logging into my Google Cloud account at https://console.cloud.googl
| Boot Disk Size | 10 GB |
| Boot Disk Image | Ubuntu 20.04 LTS |

-![Instance creation](/assets/images/posts-2021/10/20211027_instance_creation.png)
+![Instance creation](/images/posts-2021/10/20211027_instance_creation.png)

The other defaults are fine, but I'll hold off on clicking the friendly blue "Create" button at the bottom and instead click to expand the **Networking, Disks, Security, Management, Sole-Tenancy** sections to tweak a few more things.

-![Instance creation advanced settings](/assets/images/posts-2021/10/20211028_instance_advanced_settings.png)
+![Instance creation advanced settings](/images/posts-2021/10/20211028_instance_advanced_settings.png)

##### Network Configuration

Expanding the **Networking** section of the request form lets me add a new `wireguard` network tag, which will make it easier to target the instance with a firewall rule later. I also want to enable the _IP Forwarding_ option so that the instance will be able to do router-like things.
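
If I were scripting this instead of clicking through the console, the same choices would translate to roughly the following `gcloud` call. This is a sketch only; the zone and machine type here are my own assumptions, not anything this post prescribes:

```sh
# sketch: roughly the console selections above, expressed as a CLI call
gcloud compute instances create wireguard \
  --zone us-east1-b \
  --machine-type e2-micro \
  --image-family ubuntu-2004-lts \
  --image-project ubuntu-os-cloud \
  --boot-disk-size 10GB \
  --tags wireguard \
  --can-ip-forward
```
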

@ -60,7 +60,7 @@ I can do that by clicking on the _Default_ network interface to expand the confi
Anyway, after switching to the cheaper Standard tier I can click on the **External IP** dropdown and select the option to _Create IP Address_. I give it the same name as my instance to make it easy to keep up with.

-![Network configuration](/assets/images/posts-2021/10/20211027_network_settings.png)
+![Network configuration](/images/posts-2021/10/20211027_network_settings.png)

##### Security Configuration

The **Security** section lets me go ahead and upload an SSH public key that I can then use for logging into the instance once it's running. Of course, that means I'll first need to generate a key pair for this purpose:

@ -70,18 +70,18 @@ ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard
Okay, now that I've got my keys, I can click the **Add Item** button and paste in the contents of `~/.ssh/id_ed25519_wireguard.pub`.
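
That's also the key I'll point `ssh` at once the instance is up; the username and address below are just placeholders:

```sh
# connect with the matching private key once the instance has an external IP
ssh -i ~/.ssh/id_ed25519_wireguard <username>@<external_ip>
```
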

-![Security configuration](/assets/images/posts-2021/10/20211027_security_settings.png)
+![Security configuration](/images/posts-2021/10/20211027_security_settings.png)

And that's it for the pre-deploy configuration! Time to hit **Create** to kick it off.

-![Do it!](/assets/images/posts-2021/10/20211027_creation_time.png)
+![Do it!](/images/posts-2021/10/20211027_creation_time.png)

The instance creation will take a couple of minutes but I can go ahead and get the firewall sorted while I wait.

#### Firewall

Google Cloud's default firewall configuration will let me reach my new server via SSH without needing to configure anything, but I'll need to add a new rule to allow the WireGuard traffic. I do this by going to **VPC > Firewall** and clicking the button at the top to **[Create Firewall Rule](https://console.cloud.google.com/networking/firewalls/add)**. I give it a name (`allow-wireguard-ingress`), select the rule target by specifying the `wireguard` network tag I had added to the instance, and set the source range to `0.0.0.0/0`. I'm going to use the default WireGuard port, so I select the _udp:_ checkbox and enter `51820`.

-![Firewall rule creation](/assets/images/posts-2021/10/20211027_firewall.png)
+![Firewall rule creation](/images/posts-2021/10/20211027_firewall.png)

I'll click **Create** and move on.
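
For reference, the same rule could be expressed with the CLI as well - a sketch, assuming the rule lives on the default VPC network:

```sh
# sketch of the equivalent CLI call for the rule created above
gcloud compute firewall-rules create allow-wireguard-ingress \
  --direction INGRESS \
  --action ALLOW \
  --rules udp:51820 \
  --source-ranges 0.0.0.0/0 \
  --target-tags wireguard
```
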

@ -404,7 +404,7 @@ peer: {CB_PUBLIC_KEY}
```

And I can even access my homelab when not at home!

-![Remote access to my homelab!](/assets/images/posts-2021/10/20211028_remote_homelab.png)
+![Remote access to my homelab!](/images/posts-2021/10/20211028_remote_homelab.png)

#### Android Phone

Being able to copy-and-paste the required public keys between the WireGuard app and the SSH session to the GCP instance made it relatively easy to set up the Chromebook, but things could be a bit trickier on a phone without that kind of access. So instead I will create the phone's configuration on the WireGuard server in the cloud, render that config file as a QR code, and simply scan that through the phone's WireGuard app to import the settings.

@ -452,13 +452,13 @@ Back in the `clients/` directory, I can use `qrencode` to render the phone confi
```sh
qrencode -t ansiutf8 < phone1.conf
```

-![QR code config](/assets/images/posts-2021/10/20211028_qrcode_config.png)
+![QR code config](/images/posts-2021/10/20211028_qrcode_config.png)

And then I just open the WireGuard app on my phone and use the **Scan from QR Code** option. After a successful scan, it'll prompt me to name the new tunnel, and then I should be able to connect right away.
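
If the phone claims to be connected but traffic isn't flowing, a quick look at the handshake timestamps on the server side is a good sanity check (assuming the interface is named `wg0`, which this excerpt doesn't spell out):

```sh
# on the GCP instance: a recent timestamp for the phone's peer means the tunnel is alive
sudo wg show wg0 latest-handshakes
```
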

-![Successful mobile connection](/assets/images/posts-2021/10/20211028_wireguard_mobile.png)
+![Successful mobile connection](/images/posts-2021/10/20211028_wireguard_mobile.png)

I can even access my vSphere lab environment - not that it offers a great mobile experience...

-![vSphere mobile sucks](/assets/images/posts-2021/10/20211028_mobile_vsphere_sucks.jpg)
+![vSphere mobile sucks](/images/posts-2021/10/20211028_mobile_vsphere_sucks.jpg)

Before moving on much further, though, I'm going to clean up the keys and client config file that I generated on the GCP instance. It's not great hygiene to keep a private key stored on the same system it's used to access.
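
Something like `shred` handles that; the config file's name is the only one I'm certain of here, and any key files generated alongside it should get the same treatment:

```sh
# from the clients/ directory: overwrite and remove the phone's config
shred -u phone1.conf
```
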

@ -476,7 +476,7 @@ Two quick pre-requisites first:
On to the Tasker config. The only changes will be in the [VPN on Strange Wifi](auto-connect-to-protonvpn-on-untrusted-wifi-with-tasker#vpn-on-strange-wifi) profile. I'll remove the OpenVPN-related actions from the Enter and Exit tasks and replace them with the built-in **Tasker > Tasker Function WireGuard Set Tunnel** action.
For the Enter task, I'll set the tunnel status to `true` and specify the name of the tunnel as configured in the WireGuard app; the Exit task gets the status set to `false` to disable the tunnel. Both actions will be conditional upon the `%TRUSTED_WIFI` variable being unset.

-![Tasker setup](/assets/images/posts-2021/10/20211028_tasker_setup.png)
+![Tasker setup](/images/posts-2021/10/20211028_tasker_setup.png)

```
Profile: VPN on Strange WiFi

@ -16,7 +16,7 @@ I've been wanting to learn a bit more about [SaltStack Config](https://www.vmwar

### The Problem

Unfortunately I ran into a problem immediately after the deployment completed:

-![403 error from SSC](/assets/images/posts-2021/11/20211105_ssc_403.png)
+![403 error from SSC](/images/posts-2021/11/20211105_ssc_403.png)

Instead of being redirected to the vIDM authentication screen, I get a 403 Forbidden error.

@ -58,7 +58,7 @@ I fumbled around for a bit and managed to get the required certs added to the sy
So here's what I did to get things working in my homelab:
1. Point a browser to my vRA instance, click on the certificate error to view the certificate details, and then export the _CA_ certificate to a local file. (For a self-signed cert issued by LCM, this will likely be called something like `Automatically generated one-off CA authority for vRA`.)
-![Exporting the self-signed CA cert](/assets/images/posts-2021/11/20211105_export_selfsigned_ca.png)
+![Exporting the self-signed CA cert](/images/posts-2021/11/20211105_export_selfsigned_ca.png)
2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`.
3. Append the certificate to the end of the system `ca-bundle.crt`:
```sh
@ -103,9 +103,9 @@ systemctl stop raas
systemctl start raas
```
7. And then try to visit the SSC URL again. This time, it redirects successfully to vIDM:
-![Successful vIDM redirect](/assets/images/posts-2021/11/20211105_vidm_login.png)
+![Successful vIDM redirect](/images/posts-2021/11/20211105_vidm_login.png)
8. Log in and get salty:
-![Get salty!](/assets/images/posts-2021/11/20211105_get_salty.png)
+![Get salty!](/images/posts-2021/11/20211105_get_salty.png)

The steps for doing this at work with an enterprise CA were pretty similar, with just slightly different steps 1 and 2:
1. Access the enterprise CA and download the CA chain, which came in `.p7b` format.
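
A `.p7b` bundle can't be pasted straight into `ca-bundle.crt`, but `openssl` will unpack it into PEM certificates first - a sketch with a hypothetical filename:

```sh
# convert the PKCS#7 chain to PEM so the CA certs can be appended as in step 3
# (add '-inform DER' if the export is binary rather than Base64)
openssl pkcs7 -print_certs -in ca-chain.p7b -out ca-chain.pem
```
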
(The remainder of this diff is the binary image files themselves; every entry lists identical Before/After dimensions and file sizes, so the image assets are unchanged by this commit apart from where they're referenced.)