convert 'info' notices to 'note' notices because they're less ugly
This commit is contained in:
parent aa82067d31
commit 039cc151d8
14 changed files with 132 additions and 132 deletions
@@ -25,7 +25,7 @@ It was a bit trickier for Linux systems though. My Linux templates all use LVM t
I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/blob/master/linux-scripts/partresize.sh) that helped with most of those operations, but it required the user to specify the physical and logical volumes. I modified it to auto-detect those, and here's what I came up with:
-{{% notice info "MBR only" %}}
+{{% notice note "MBR only" %}}
When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu.
{{% /notice %}}
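
To illustrate the auto-detection piece described above, here's a minimal sketch (not the author's full script) of how the volume names might be discovered with standard LVM tooling, assuming a simple single-PV/single-VG layout:

```shell
# Minimal sketch, assuming one PV/VG and that the root LV is the one to grow;
# the real partresize.sh-derived script also handles growing the partition itself.
pv=$(pvs --noheadings -o pv_name | xargs)                 # e.g. /dev/sda2
vg=$(vgs --noheadings -o vg_name | xargs)                 # e.g. centos
lv=$(lvs --noheadings -o lv_path "$vg" | head -1 | xargs) # e.g. /dev/centos/root
pvresize "$pv"                # pick up the newly-grown partition
lvextend -l +100%FREE "$lv"   # hand all free extents to the root LV
xfs_growfs /                  # grow the root filesystem (use resize2fs "$lv" for ext4)
```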
@@ -21,7 +21,7 @@ I wanted to try out the self-hosted setup, and I discovered that the [official d
I then came across [this comment](https://www.reddit.com/r/Bitwarden/comments/8vmwwe/best_place_to_self_host_bitwarden/e1p2f71/) on Reddit which discussed in somewhat-vague terms the steps required to get BitWarden to run on the [free](https://cloud.google.com/free/docs/always-free-usage-limits#compute_name) `e2-micro` instance, and also introduced me to the community-built [vaultwarden](https://github.com/dani-garcia/vaultwarden) project which is specifically designed to run a BW-compatible server on resource-constrained hardware. So here are the steps I wound up taking to get this up and running.
-{{% notice info "bitwarden_rs -> vaultwarden"%}}
+{{% notice note "bitwarden_rs -> vaultwarden"%}}
When I originally wrote this post back in September 2018, the containerized BitWarden solution was called `bitwarden_rs`. The project [has since been renamed](https://github.com/dani-garcia/vaultwarden/discussions/1642) to `vaultwarden`, and I've since moved to the hosted version of BitWarden. I have attempted to update this article to account for the change but have not personally tested this lately. Good luck, dear reader!
{{% /notice %}}
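
For context, a bare-bones way to stand up the renamed image looks something like this (illustrative only; the post's actual setup layers on a reverse proxy and other options, and `/vw-data` is just an arbitrary host path):

```shell
# Minimal vaultwarden container: persist its data under /vw-data on the host
# and expose the web vault on port 80
sudo docker run -d --name vaultwarden \
  -v /vw-data/:/data/ \
  -p 80:80 \
  vaultwarden/server:latest
```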
@@ -196,7 +196,7 @@ vagrant destroy
### Create a heavy VM, as a treat
Having proven to myself that Vagrant does work on a Chromebook, let's see how it does with a slightly-heavier VM.... like [Windows 11](https://app.vagrantup.com/oopsme/boxes/windows11-22h2).
-{{% notice info "Space Requirement" %}}
+{{% notice note "Space Requirement" %}}
Windows 11 makes for a pretty hefty VM which will require significant storage space. My Chromebook's Linux environment ran out of storage space the first time I attempted to deploy this guy. Fortunately ChromeOS makes it easy to allocate more space to Linux (**Settings > Advanced > Developers > Linux development environment > Disk size**). You'll probably need at least 30GB free to provision this VM.
{{% /notice %}}
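
Pulling that box down is the usual Vagrant two-step; something like this (standard Vagrant commands, shown here just for reference):

```shell
# Grab the Windows 11 box mentioned above and bring it up;
# the first run downloads a large image, hence the storage note
vagrant init oopsme/windows11-22h2
vagrant up
```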
@@ -25,7 +25,7 @@ tags:
- vpn
comment: true # Disable comment if false.
---
-{{% notice info "ESXi-ARM Fling v1.10 Update" %}}
+{{% notice note "ESXi-ARM Fling v1.10 Update" %}}
On July 20, 2022, VMware released a [major update](https://blogs.vmware.com/arm/2022/07/20/1-10/) for the ESXi-ARM Fling. Among [other fixes and improvements](https://flings.vmware.com/esxi-arm-edition#changelog), this version enables **in-place ESXi upgrades** and [adds support for the Quartz64's **on-board NIC**](https://twitter.com/jmcwhatever/status/1549935971822706688). To update, I:
1. Wrote the new ISO installer to another USB drive.
2. Attached the installer drive to the USB hub, next to the existing ESXi drive.
@@ -225,7 +225,7 @@ The rest of the OVF deployment is basically just selecting the default options a
#### Configuring Photon
There are just a few things I'll want to configure on this VM before I move on to installing Tailscale, and I'll start out simply by logging in with the remote console.
-{{% notice info "Default credentials" %}}
+{{% notice note "Default credentials" %}}
The default password for Photon's `root` user is `changeme`. You'll be forced to change that at first login.
{{% /notice %}}
@@ -224,7 +224,7 @@ So now if I want to find out which images have been configured in vRA, I can ask
```shell
https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token"
```
-{{% notice info "Request Items" %}}
+{{% notice note "Request Items" %}}
Remember from above that HTTPie will automatically insert key/value pairs separated by a colon into the request header.
{{% /notice %}}
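
For anyone more used to `curl`, the equivalent request (illustrative only) spells that header out explicitly, which is exactly what HTTPie's `key: value` shorthand is doing under the hood:

```shell
# Same request as above, but with curl; -H sets the Authorization header that
# HTTPie builds from the "Authorization: Bearer $token" argument
curl -s -H "Authorization: Bearer $token" https://vra.lab.bowdre.net/iaas/api/images
```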
@@ -316,7 +316,7 @@ Now let's take what we've learned and see about implementing it as vRO actions.
### vRealize Orchestrator actions
My immediate goal for this exercise is to create a set of vRealize Orchestrator actions which take in a zone/location identifier from the Cloud Assembly request and return a list of images which are available for deployment there. I'll start with some utility actions to do the heavy lifting, and then I'll be able to call them from other actions as things get more complicated/interesting. Before I can do that, though, I'll need to add the vRA instance as an HTTP REST endpoint in vRO.
-{{% notice info "This post brought to you by..." %}}
+{{% notice note "This post brought to you by..." %}}
A lot of what follows was borrowed *heavily* from a [very helpful post by Oktawiusz Poranski over at Automate Clouds](https://automateclouds.com/2021/vrealize-automation-8-rest-api-how-to/) so be sure to check out that site for more great tips on working with APIs!
{{% /notice %}}
@@ -126,7 +126,7 @@ run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save
run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save
```
-{{% notice info "Cloud Firewall" %}}
+{{% notice note "Cloud Firewall" %}}
Of course I will also need to create matching rules in the cloud firewall, but I'm not going to detail [those steps](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration) again here. And since I've now got Tailscale up and running I can remove the pre-created rule to allow SSH access through the cloud firewall.
{{% /notice %}}
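
For reference, the `run-parts` output shown above is what you'd typically see when persisting the rules; a sketch of that step with standard `netfilter-persistent` usage:

```shell
# Save the current iptables/ip6tables rules so they survive a reboot;
# this is what triggers the run-parts plugin output shown above
sudo netfilter-persistent save
```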
@@ -153,7 +153,7 @@ When other users add their SSH public keys into Gitea's web UI, those will get a
command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty <user pubkey>
```
-{{% notice info "Not just yet" %}}
+{{% notice note "Not just yet" %}}
No users have added their keys to Gitea just yet so if you look at `/home/git/.ssh/authorized_keys` right now you won't see this extra line, but I wanted to go ahead and mention it to explain the next step. It'll show up later. I promise.
{{% /notice %}}
@@ -251,7 +251,7 @@ services:
volumes:
- ./postgres:/var/lib/postgresql/data
```
-{{% notice info "Pin the PostgreSQL version" %}}
+{{% notice note "Pin the PostgreSQL version" %}}
The format of PostgreSQL data changes with new releases, and that means that the data created by different major releases are not compatible. Unless you take steps to upgrade the data format, you'll have problems when a new major release of PostgreSQL arrives. Avoid the headache: pin this to a major version (as I did with `image: postgres:14` above) so you can upgrade on your terms.
{{% /notice %}}
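
A quick way to double-check that the pin took effect (illustrative; swap in whatever the compose service is actually named):

```shell
# Confirm the running major version matches the pinned image tag;
# "db" here is a hypothetical service name from the compose file
docker compose exec db postgres --version
```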
@@ -123,7 +123,7 @@ Okay, we're finally ready to start entering our subnets at **Administration > IP
I also enabled the options ~~*Mark as pool*~~, *Check hosts status*, *Discover new hosts*, and *Resolve DNS names*.
![Subnet options](SR7oD0jsG.png)
-{{% notice info "Update" %}}
+{{% notice note "Update" %}}
Since releasing this integration, I've learned that phpIPAM intends for the `isPool` field to identify networks where the entire range (including the subnet and broadcast addresses) are available for assignment. As a result, I no longer recommend using that field. Instead, consider [creating a custom field](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/docs/custom_field.md) for tagging networks for vRA availability.
{{% /notice %}}
@@ -292,7 +292,7 @@ You'll notice that the form includes fields for Username, Password, and Hostname
}
}
```
-{{% notice info "Update" %}}
+{{% notice note "Update" %}}
Check out the [source on GitHub](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/src/main/resources/endpoint-schema.json) to see how I adjusted the schema to support custom field input.
{{% /notice %}}
@@ -427,7 +427,7 @@ subnets = subnets.json()['data']
```
I decided to add the extra `filter_by=isPool&filter_value=1` argument to the query so that it will only return subnets marked as a pool in phpIPAM. This way I can use phpIPAM for monitoring address usage on a much larger set of subnets while only presenting a handful of those to vRA.
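
For the curious, the same filtered query can be exercised directly against the phpIPAM API; a rough sketch (hostname, app ID, and token are placeholders):

```shell
# Illustrative request showing the filter_by/filter_value parameters mentioned
# above; phpIPAM expects the API token in a "token" header, and "vra" is a
# hypothetical app ID
curl -s -H "token: $PHPIPAM_TOKEN" \
  "https://phpipam.example.com/api/vra/subnets/?filter_by=isPool&filter_value=1"
```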
-{{% notice info "Update" %}}
+{{% notice note "Update" %}}
I now filter for networks identified by the designated custom field like so:
```python
# Request list of subnets
@@ -1297,7 +1297,7 @@ Now that all the ducks are nicely lined up, let's give them some marching orders
packer build -on-error=abort -force .
```
-{{% notice info "Flags" %}}
+{{% notice note "Flags" %}}
The `-on-error=abort` option makes sure that the build will abort if any steps in the build fail, and `-force` tells Packer to delete any existing VMs/templates with the same name as the one I'm attempting to build.
{{% /notice %}}
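
As a side note, when iterating on a template it can be handy to swap in a different failure behavior (standard Packer options, shown purely for illustration):

```shell
# Pause on failure instead of aborting so the half-built VM can be inspected;
# other accepted values for -on-error are "cleanup" (the default) and "abort"
packer build -on-error=ask -force .
```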
@@ -207,7 +207,7 @@ This overlay will need to be inserted into the `pinniped-addon` secret which mea
❯ base64 -w 0 pinniped-supervisor-svc-overlay.yaml
I0AgbG9hZCgi[...]==
```
-{{% notice info "Avoid newlines" %}}
+{{% notice note "Avoid newlines" %}}
The `-w 0` / `--wrap=0` argument tells `base64` to *not* wrap the encoded lines after a certain number of characters. If you leave this off, the string will get a newline inserted every 76 characters, and those linebreaks would make the string a bit more tricky to work with. Avoid having to clean up the output afterwards by being more specific with the request up front!
{{% /notice %}}
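
One possible refinement (assuming `xclip` is available) is to send the unwrapped string straight to the clipboard so it never has to be selected by hand:

```shell
# Encode without wrapping and copy the result to the X clipboard in one go;
# xclip is an assumption - substitute pbcopy on macOS or wl-copy on Wayland
base64 -w 0 pinniped-supervisor-svc-overlay.yaml | xclip -selection clipboard
```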
@@ -22,7 +22,7 @@ tags:
- powercli
comment: true # Disable comment if false.
---
-{{% notice info "Fix available" %}}
+{{% notice note "Fix available" %}}
VMware has released a fix for this problem in the form of [ESXi 7.0 Update 3k](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3k-release-notes.html#resolvedissues):
> If you already face the issue, after patching the host to ESXi 7.0 Update 3k, just power on the affected Windows Server 2022 VMs. After you patch a host to ESXi 7.0 Update 3k, you can migrate a running Windows Server 2022 VM from a host of version earlier than ESXi 7.0 Update 3k, install KB5022842, and the VM boots properly without any additional steps required.
{{% /notice %}}
@@ -31,7 +31,7 @@ And then I discovered [Tailscale](https://tailscale.com/), which is built on the
There's already a great write-up (from the source!) on [How Tailscale Works](https://tailscale.com/blog/how-tailscale-works/), and it's really worth a read so I won't rehash it fully here. The tl;dr though is that Tailscale makes securely connecting remote systems incredibly easy, and it lets those systems connect with each other directly ("mesh") rather than needing traffic to go through a single VPN endpoint ("hub-and-spoke"). It uses a centralized coordination server to *coordinate* the complicated key exchanges needed for all members of a Tailscale network (a "[tailnet](https://tailscale.com/kb/1136/tailnet/)") to trust each other, and this removes the need for a human to manually edit configuration files on every existing device just to add a new one to the mix. Tailscale also leverages [magic :tada:](https://tailscale.com/blog/how-nat-traversal-works/) to allow Tailscale nodes to communicate with each other without having to punch holes in firewall configurations or forward ports or anything else tedious and messy. (And in case that the typical NAT traversal techniques don't work out, Tailscale created the Detoured Encrypted Routing Protocol (DERP[^derp]) to make sure Tailscale can still function seamlessly even on extremely restrictive networks that block UDP entirely or otherwise interfere with NAT traversal.)
-{{% notice info "Not a VPN Service" %}}
+{{% notice note "Not a VPN Service" %}}
It's a no-brainer solution for remote access, but it's important to note that Tailscale is not a VPN *service*; it won't allow you to internet anonymously or make it appear like you're connecting from a different country (unless you configure a Tailscale Exit Node hosted somewhere in The Cloud to do just that).
{{% /notice %}}
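
If you do want that exit-node behavior, the relevant flags look like this (standard Tailscale CLI options, shown for illustration; the exit node also has to be approved in the admin console):

```shell
# On the cloud-hosted node that should relay traffic for others:
sudo tailscale up --advertise-exit-node
# On the client that should route its internet traffic through it:
sudo tailscale up --exit-node=<exit-node-ip>
```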
@@ -50,7 +50,7 @@ sudo systemctl start tailscaled
From that point, just [`sudo tailscale up`](https://tailscale.com/kb/1080/cli/#up) like normal.
-{{% notice info "Updating Tailscale" %}}
+{{% notice note "Updating Tailscale" %}}
Since Tailscale was installed outside of any package manager, it won't get updated automatically. When new versions are released you'll need to update it manually. To do that:
1. Download and extract the new version.
2. Install the `tailscale` and `tailscaled` binaries as described above (no need to install the service files again).
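
A rough sketch of what those two steps might look like, assuming the static tarball from Tailscale's package server; the version, architecture, and install paths are placeholders, so adjust them to match however the binaries were originally installed:

```shell
# Step 1: download and extract a newer static build (version/arch are examples only)
wget https://pkgs.tailscale.com/stable/tailscale_1.xx.x_arm64.tgz
tar xvf tailscale_1.xx.x_arm64.tgz
# Step 2: drop the new binaries over the old ones (paths are assumptions)
sudo install tailscale_1.xx.x_arm64/tailscale /usr/bin/tailscale
sudo install tailscale_1.xx.x_arm64/tailscaled /usr/sbin/tailscaled
```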
@@ -60,7 +60,7 @@ Name Vendor Acceptance Level Creation Time
ESXi-8.0.0-20513097-standard VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
ESXi-8.0.0-20513097-no-tools VMware, Inc. PartnerSupported 2022-09-23T18:59:28 2022-09-23T18:59:28
```
-{{% notice info "Absolute paths" %}}
+{{% notice note "Absolute paths" %}}
When using the `esxcli` command to install software/updates, it's important to use absolute paths rather than relative paths. Otherwise you'll get errors and wind up chasing your tail for a while.
{{% /notice %}}
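
For example, an upgrade using one of the profiles listed above would look something like this (illustrative; the datastore and depot filename are placeholders):

```shell
# Note the full absolute path to the depot zip -- a relative path here is what
# leads to the "chasing your tail" errors mentioned above
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-8.0-20513097-depot.zip -p ESXi-8.0.0-20513097-standard
```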
@@ -321,7 +321,7 @@ The final step of this workflow will be to replace the existing contents of `res
I'll do that with another scriptable task element, named `Apply new names`, which takes `inputProperties (Properties)` and `newNames (Array/string)` as inputs. It will return `resourceNames (Array/string)` as a *workflow output* back to vRA. vRA will see that `resourceNames` has changed and it will update the name of the deployed resource (the VM) accordingly.
-{{% notice info "Binding a workflow output" %}}
+{{% notice note "Binding a workflow output" %}}
To easily create a new workflow output and bind it to a task's output, click the task's **Add New** option like usual:
![](add_new.png)
Select **Output** at the top of the *New Variable* dialog and then complete the form with the other required details: