mirror of https://github.com/jbowdre/runtimeterror.git
synced 2024-12-24 03:42:19 +00:00

commit 7e5014050c: update posts for torchlight
parent: ffd7be9de5
9 changed files with 284 additions and 274 deletions
@@ -30,24 +30,24 @@ I settled on using [FreeCAD](https://www.freecadweb.org/) for parametric modelin
 
 #### FreeCAD
 Installing FreeCAD is as easy as:
-```command
-sudo apt update
+```shell
+sudo apt update # [tl! .cmd:2]
 sudo apt install freecad
 ```
 But launching `/usr/bin/freecad` caused me some weird graphical defects which rendered the application unusable. I found that I needed to pass the `LIBGL_DRI3_DISABLE=1` environment variable to eliminate these glitches:
-```command
-env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &
+```shell
+env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad & # [tl! .cmd]
 ```
 To avoid having to type that every time I wished to launch the app, I inserted this line at the bottom of my `~/.bashrc` file:
-```command
+```shell
 alias freecad="env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &"
 ```
 To be able to start FreeCAD from the Chrome OS launcher with that environment variable intact, edit it into the `Exec` line of the `/usr/share/applications/freecad.desktop` file:
-```command
-sudo vi /usr/share/applications/freecad.desktop
+```shell
+sudo vi /usr/share/applications/freecad.desktop # [tl! .cmd]
 ```
 
-```cfg {linenos=true}
+```ini
 [Desktop Entry]
 Version=1.0
 Name=FreeCAD

@@ -56,7 +56,7 @@ Comment=Feature based Parametric Modeler
 Comment[de]=Feature-basierter parametrischer Modellierer
 GenericName=CAD Application
 GenericName[de]=CAD-Anwendung
-Exec=env LIBGL_DRI3_DISABLE=1 /usr/bin/freecad %F
+Exec=env LIBGL_DRI3_DISABLE=1 /usr/bin/freecad %F # [tl! focus]
 Path=/usr/lib/freecad
 Terminal=false
 Type=Application

@@ -75,16 +75,16 @@ Now that you've got a model, be sure to [export it as an STL mesh](https://wiki.
 Cura isn't available from the default repos so you'll need to download the AppImage from https://github.com/Ultimaker/Cura/releases/tag/4.7.1. You can do this in Chrome and then use the built-in File app to move the file into your 'My Files > Linux Files' directory. Feel free to put it in a subfolder if you want to keep things organized - I stash all my AppImages in `~/Applications/`.
 
 To be able to actually execute the AppImage you'll need to adjust the permissions with 'chmod +x':
-```command
-chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage
+```shell
+chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage # [tl! .cmd]
 ```
 You can then start up the app by calling the file directly:
-```command
-~/Applications/Ultimaker_Cura-4.7.1.AppImage &
+```shell
+~/Applications/Ultimaker_Cura-4.7.1.AppImage & # [tl! .cmd]
 ```
 AppImages don't automatically appear in the Chrome OS launcher so you'll need to create its `.desktop` file. You can do this manually if you want, but I found it a lot easier to leverage `menulibre`:
-```command
-sudo apt update && sudo apt install menulibre
+```shell
+sudo apt update && sudo apt install menulibre # [tl! .cmd:2]
 menulibre
 ```
 Just plug in the relevant details (you can grab the appropriate icon [here](https://github.com/Ultimaker/Cura/blob/master/icons/cura-128.png)), hit the filing cabinet Save icon, and you should then be able to search for Cura from the Chrome OS launcher.
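For reference, the hand-made route mentioned above is just a small text file dropped into the applications directory. A sketch, assuming the AppImage lives in `~/Applications/` as described (the `Comment`, `Icon` path, and `Categories` values here are my own guesses, not from the post):

```shell
# create a minimal launcher for the Cura AppImage (paths assumed from above)
mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/cura.desktop <<EOF
[Desktop Entry]
Type=Application
Name=Ultimaker Cura
Comment=Slicer for 3D printing
Exec=${HOME}/Applications/Ultimaker_Cura-4.7.1.AppImage %F
Icon=${HOME}/Applications/cura-128.png
Terminal=false
Categories=Graphics;
EOF
```

The unquoted heredoc lets `${HOME}` expand at creation time while the `%F` field code stays literal for the desktop environment to handle.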
@@ -21,7 +21,7 @@ I'll start this by adding a few new inputs to the cloud template in Cloud Assemb
 
 I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`.
 
-```yaml {linenos=true}
+```yaml
 inputs:
   [...]
   description:
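That validation pattern can be exercised outside of Cloud Assembly, too. A quick illustration using GNU grep's PCRE mode (`-P`), since the pattern uses the `\s` shorthand; this is just a sanity check, not part of the template:

```shell
# the template's email validation pattern, tried against sample inputs
pattern='^[^\s@]+@[^\s@]+\.[^\s@]+$'
echo 'jack.shephard@virtuallypotato.com' | grep -qP "$pattern" && echo 'valid'
echo 'not an email' | grep -qP "$pattern" || echo 'invalid'
```

The first line prints `valid` and the second prints `invalid`, matching what the request form would accept or reject.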
@@ -36,8 +36,8 @@ inputs:
   poc_email:
     type: string
     title: Point of Contact Email
-    default: jack.shephard@virtuallypotato.com
-    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$'
+    default: username@example.com
+    pattern: '^[^\s@]+@[^\s@]+\.[^\s@]+$' # [tl! highlight]
   ticket:
     type: string
     title: Ticket/Request Number

@@ -46,17 +46,18 @@ inputs:
 ```
 
 I'll also need to add these to the `resources` section of the template so that they will get passed along with the deployment properties.
 
 ![New resource properties](N7YllJkxS.png)
 
 I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string.
 
-```yaml {linenos=true}
+```yaml
 resources:
   Cloud_vSphere_Machine_1:
     type: Cloud.vSphere.Machine
     properties:
       <...>
-      poc: '${input.poc_name + " (" + input.poc_email + ")"}'
+      poc: '${input.poc_name + " (" + input.poc_email + ")"}' # [tl! highlight]
       ticket: '${input.ticket}'
       description: '${input.description}'
       <...>
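The concatenation in that `poc` expression is nothing exotic; the same name-space-(email) string can be mocked up in plain shell to see what lands in the custom attribute (sample values from the template defaults):

```shell
# mirror of '${input.poc_name + " (" + input.poc_email + ")"}' in plain shell
poc_name='Jack Shephard'
poc_email='username@example.com'
poc="${poc_name} (${poc_email})"
echo "$poc"   # Jack Shephard (username@example.com)
```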
@@ -80,7 +81,8 @@ The first thing this workflow needs to do is parse `inputProperties (Properties)
 ![Get VM Object action](5ATk99aPW.png)
 
 The script for this task is fairly straightforward:
-```js {linenos=true}
+```javascript
+// torchlight! {"lineNumbers": true}
 // JavaScript: Get VM Object
 // Inputs: inputProperties (Properties)
 // Outputs: vm (VC:VirtualMachine)

@@ -99,7 +101,8 @@ The first part of the script creates a new VM config spec, inserts the descripti
 
 The second part uses a built-in action to set the `Point of Contact` and `Ticket` custom attributes accordingly.
 
-```js {linenos=true}
+```javascript
+// torchlight! {"lineNumbers": true}
 // Javascript: Set Notes
 // Inputs: vm (VC:VirtualMachine), inputProperties (Properties)
 // Outputs: None

@@ -112,7 +115,7 @@ var spec = new VcVirtualMachineConfigSpec()
 spec.annotation = notes
 vm.reconfigVM_Task(spec)
 
-System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm,"Point of Contact", poc)
+System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm,"Point of Contact", poc) // [tl! highlight:2]
 System.getModule("com.vmware.library.vc.customattribute").setOrCreateCustomField(vm,"Ticket", ticket)
 ```
@@ -78,7 +78,7 @@ chmod +x /usr/local/bin/docker-compose
 And then verify that it works:
 ```shell
 docker-compose --version # [tl! .cmd_root]
-docker-compose version 1.29.2, build 5becea4c # [tl! .cmd_return]
+docker-compose version 1.29.2, build 5becea4c # [tl! .nocopy]
 ```
 
 I'll also want to enable and start Docker:
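For context, the compose file that gets fired up in the next step would look something like this minimal sketch. This is not the post's actual file; the published ports, volume paths, and `restart` policy here are assumptions, with only the image and container name taken from the deployment output:

```yaml
version: '3'

services:
  adguard:
    image: adguard/adguardhome:latest
    container_name: adguard
    ports:
      - '53:53/tcp'
      - '53:53/udp'
      - '80:80/tcp'
    volumes:
      - './work:/opt/adguardhome/work'
      - './conf:/opt/adguardhome/conf'
    restart: unless-stopped
```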
@@ -136,7 +136,7 @@ Then I can fire it up with `docker-compose up --detach`:
 
 ```shell
 docker-compose up --detach # [tl! .cmd_root focus:start]
-Creating network "adguard_default" with the default driver # [tl! .cmd_return:start]
+Creating network "adguard_default" with the default driver # [tl! .nocopy:start]
 Pulling adguard (adguard/adguardhome:latest)...
 latest: Pulling from adguard/adguardhome # [tl! focus:end]
 339de151aab4: Pull complete

@@ -145,7 +145,7 @@ latest: Pulling from adguard/adguardhome # [tl! focus:end]
 bfad96428d01: Pull complete
 Digest: sha256:de7d791b814560663fe95f9812fca2d6dd9d6507e4b1b29926cc7b4a08a676ad # [tl! focus:3]
 Status: Downloaded newer image for adguard/adguardhome:latest
-Creating adguard ... done # [tl! .cmd_return:end]
+Creating adguard ... done # [tl! .nocopy:end]
 ```
@@ -29,7 +29,8 @@ I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/bl
 When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu.
 {{% /notice %}}
 
-```shell {linenos=true}
+```shell
+# torchlight! {"lineNumbers": true}
 #!/bin/bash
 # This will attempt to automatically detect the LVM logical volume where / is mounted and then
 # expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical
@@ -40,27 +40,29 @@ When I originally wrote this post back in September 2018, the containerized BitW
 1. Log in to the [Google Domain admin portal](https://domains.google.com/registrar) and [create a new Dynamic DNS record](https://domains.google.com/registrar). This will provide a username and password specific for that record.
 2. Log in to the GCE instance and run `sudo apt-get update` followed by `sudo apt-get install ddclient`. Part of the install process prompts you to configure things... just accept the defaults and move on.
 3. Edit the `ddclient` config file to look like this, substituting the username, password, and FQDN from Google Domains:
-```command
-sudo vim /etc/ddclient.conf
+```shell
+sudo vim /etc/ddclient.conf # [tl! .cmd]
 ```
 
-```cfg {linenos=true,hl_lines=["10-12"]}
-# Configuration file for ddclient generated by debconf
-#
-# /etc/ddclient.conf
+```ini
+# torchlight! {"lineNumbers": true}
+# Configuration file for ddclient generated by debconf
+#
+# /etc/ddclient.conf
 
-protocol=googledomains,
-ssl=yes,
-syslog=yes,
-use=web,
-server=domains.google.com,
-login='[USERNAME]',
-password='[PASSWORD]',
-[FQDN]
+protocol=googledomains,
+ssl=yes,
+syslog=yes,
+use=web,
+server=domains.google.com,
+login='[USERNAME]', # [tl! highlight:3]
+password='[PASSWORD]',
+[FQDN]
 ```
 4. `sudo vi /etc/default/ddclient` and make sure that `run_daemon="true"`:
 
-```cfg {linenos=true,hl_lines=16}
+```ini
+# torchlight! {"lineNumbers": true}
 # Configuration for ddclient scripts
 # generated from debconf on Sat Sep  8 21:58:02 UTC 2018
 #

@@ -74,7 +76,7 @@ run_dhclient="false"
 # established. This might be useful, if you are using dial-on-demand.
 run_ipup="false"
 
-# Set to "true" if ddclient should run in daemon mode
+# Set to "true" if ddclient should run in daemon mode [tl! focus:3]
 # If this is changed to true, run_ipup and run_dhclient must be set to false.
 run_daemon="true"

@@ -83,8 +85,8 @@ run_daemon="true"
 daemon_interval="300"
 ```
 5. Restart the `ddclient` service - twice for good measure (daemon mode only gets activated on the second go *because reasons*):
-```command
-sudo systemctl restart ddclient
+```shell
+sudo systemctl restart ddclient # [tl! .cmd:2]
 sudo systemctl restart ddclient
 ```
 6. After a few moments, refresh the Google Domains page to verify that your instance's external IP address is showing up on the new DDNS record.
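If you're scripting this setup rather than editing by hand, the `[USERNAME]`, `[PASSWORD]`, and `[FQDN]` placeholders from step 3 can be stamped in with `sed`. A sketch, working on a local copy of the config (the variable names and sample values are mine; the real credentials come from the Google Domains record):

```shell
# fill in the ddclient.conf placeholders (sample values; use your own)
DDNS_USER='generated-username'
DDNS_PASS='generated-password'
DDNS_FQDN='vault.example.com'
conf=ddclient.conf   # edit a local copy, then sudo-install it over /etc/ddclient.conf
printf "login='[USERNAME]',\npassword='[PASSWORD]',\n[FQDN]\n" > "$conf"
sed -i \
  -e "s/\[USERNAME\]/${DDNS_USER}/" \
  -e "s/\[PASSWORD\]/${DDNS_PASS}/" \
  -e "s/\[FQDN\]/${DDNS_FQDN}/" \
  "$conf"
```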
@@ -92,12 +94,12 @@ sudo systemctl restart ddclient
 ### Install Docker
 *Steps taken from [here](https://docs.docker.com/install/linux/docker-ce/debian/).*
 1. Update `apt` package index:
-```command
-sudo apt-get update
+```shell
+sudo apt-get update # [tl! .cmd]
 ```
 2. Install package management prereqs:
-```command-session
-sudo apt-get install \
+```shell
+sudo apt-get install \ # [tl! .cmd]
     apt-transport-https \
     ca-certificates \
     curl \

@@ -105,47 +107,47 @@ sudo apt-get install \
     software-properties-common
 ```
 3. Add Docker GPG key:
-```command
-curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
+```shell
+curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - # [tl! .cmd]
 ```
 4. Add the Docker repo:
-```command-session
-sudo add-apt-repository \
+```shell
+sudo add-apt-repository \ # [tl! .cmd]
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
 ```
 5. Update apt index again:
-```command
-sudo apt-get update
+```shell
+sudo apt-get update # [tl! .cmd]
 ```
 6. Install Docker:
-```command
-sudo apt-get install docker-ce
+```shell
+sudo apt-get install docker-ce # [tl! .cmd]
 ```
 
 ### Install Certbot and generate SSL cert
 *Steps taken from [here](https://certbot.eff.org/instructions?ws=other&os=debianbuster).*
 1. Install Certbot:
-```command
-sudo apt-get install certbot
+```shell
+sudo apt-get install certbot # [tl! .cmd]
 ```
 2. Generate certificate:
-```command
-sudo certbot certonly --standalone -d [FQDN]
+```shell
+sudo certbot certonly --standalone -d ${FQDN} # [tl! .cmd]
 ```
 3. Create a directory to store the new certificates and copy them there:
-```command
-sudo mkdir -p /ssl/keys/
-sudo cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/
-sudo cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/
+```shell
+sudo mkdir -p /ssl/keys/ # [tl! .cmd:3]
+sudo cp -p /etc/letsencrypt/live/${FQDN}/fullchain.pem /ssl/keys/
+sudo cp -p /etc/letsencrypt/live/${FQDN}/privkey.pem /ssl/keys/
 ```
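Let's Encrypt certs only last 90 days, so the copy step above is worth automating for renewals. Certbot runs deploy hooks after each successful renewal and exports `RENEWED_LINEAGE` pointing at the renewed cert's live directory; a sketch of such a hook, written as a function for clarity (the script path and function name are my own, not from the post):

```shell
# sketch of a certbot deploy hook, e.g. /etc/letsencrypt/renewal-hooks/deploy/copy-certs.sh
copy_certs() {
  # certbot sets RENEWED_LINEAGE to the renewed cert's live directory
  local dest="${1:-/ssl/keys}"
  cp -p "${RENEWED_LINEAGE}/fullchain.pem" "$dest/"
  cp -p "${RENEWED_LINEAGE}/privkey.pem" "$dest/"
}
```

As a standalone hook script, the body of the function would run directly, with `/ssl/keys` as the destination.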
 
 ### Set up vaultwarden
 *Using the container image available [here](https://github.com/dani-garcia/vaultwarden).*
 1. Let's just get it up and running first:
-```command-session
-sudo docker run -d --name vaultwarden \
+```shell
+sudo docker run -d --name vaultwarden \ # [tl! .cmd]
    -e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \
    -e ROCKET_PORT='8000' \
    -v /ssl/keys/:/ssl/ \
@@ -157,7 +159,7 @@ sudo docker run -d --name vaultwarden \
 2. At this point you should be able to point your web browser at `https://[FQDN]` and see the BitWarden login screen. Click on the Create button and set up a new account. Log in, look around, add some passwords, etc. Everything should basically work just fine.
 3. Unless you want to host passwords for all of the Internet you'll probably want to disable signups at some point by adding the `env` option `SIGNUPS_ALLOWED=false`. And you'll need to set `DOMAIN=https://[FQDN]` if you want to use U2F authentication:
 ```shell
-sudo docker stop vaultwarden
+sudo docker stop vaultwarden # [tl! .cmd:2]
 sudo docker rm vaultwarden
 sudo docker run -d --name vaultwarden \
    -e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \

@@ -174,11 +176,12 @@ sudo docker run -d --name vaultwarden \
 ### Install vaultwarden as a service
 *So we don't have to keep manually firing this thing off.*
 1. Create a script at `/usr/local/bin/start-vaultwarden.sh` to stop, remove, update, and (re)start the `vaultwarden` container:
-```command
-sudo vim /usr/local/bin/start-vaultwarden.sh
+```shell
+sudo vim /usr/local/bin/start-vaultwarden.sh # [tl! .cmd]
 ```
 
 ```shell
+# torchlight! {"lineNumbers": true}
 #!/bin/bash
 
 docker stop vaultwarden

@@ -189,7 +192,7 @@ docker run -d --name vaultwarden \
    -e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \
    -e ROCKET_PORT='8000' \
    -e SIGNUPS_ALLOWED=false \
-   -e DOMAIN=https://[FQDN] \
+   -e DOMAIN=https://${FQDN} \
    -v /ssl/keys/:/ssl/ \
    -v /bw-data/:/data/ \
    -v /icon_cache/ \
@@ -197,43 +200,43 @@ docker run -d --name vaultwarden \
    vaultwarden/server:latest
 ```
 
-```command
-sudo chmod 744 /usr/local/bin/start-vaultwarden.sh
+```shell
+sudo chmod 744 /usr/local/bin/start-vaultwarden.sh # [tl! .cmd]
 ```
 2. And add it as a `systemd` service:
-```command
-sudo vim /etc/systemd/system/vaultwarden.service
+```shell
+sudo vim /etc/systemd/system/vaultwarden.service # [tl! .cmd]
 ```
 
-```cfg
-[Unit]
-Description=BitWarden container
-Requires=docker.service
-After=docker.service
+```ini
+[Unit]
+Description=BitWarden container
+Requires=docker.service
+After=docker.service
 
-[Service]
-Restart=always
-ExecStart=/usr/local/bin/start-vaultwarden.sh
-ExecStop=/usr/bin/docker stop vaultwarden
+[Service]
+Restart=always
+ExecStart=/usr/local/bin/start-vaultwarden.sh # [tl! highlight]
+ExecStop=/usr/bin/docker stop vaultwarden
 
-[Install]
-WantedBy=default.target
+[Install]
+WantedBy=default.target
 ```
 
-```command
-sudo chmod 644 /etc/systemd/system/vaultwarden.service
+```shell
+sudo chmod 644 /etc/systemd/system/vaultwarden.service # [tl! .cmd]
 ```
 3. Try it out:
-```command
-sudo systemctl start vaultwarden
+```shell
+sudo systemctl start vaultwarden # [tl! .cmd]
 ```
 
-```command-session
-sudo systemctl status vaultwarden
-● bitwarden.service - BitWarden container
+```shell
+sudo systemctl status vaultwarden # [tl! .cmd focus:start]
+● bitwarden.service - BitWarden container # [tl! .nocopy:start]
    Loaded: loaded (/etc/systemd/system/vaultwarden.service; enabled; vendor preset: enabled)
    Active: deactivating (stop) since Sun 2018-09-09 03:43:20 UTC; 1s ago
-  Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS)
+  Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS) # [tl! focus:end]
  Main PID: 13104 (code=exited, status=0/SUCCESS); Control PID: 13229 (docker)
     Tasks: 5 (limit: 4915)
    Memory: 9.7M
@@ -243,7 +246,7 @@ sudo systemctl status vaultwarden
            └─13229 /usr/bin/docker stop vaultwarden
 
 Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: Status: Image is up to date for vaultwarden/server:latest
-Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645
+Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645 # [tl! .nocopy:end]
 ```
 
 ### Conclusion
@@ -37,8 +37,8 @@ Some networks have masks in the name, some don't; and some use an underscore (`_
 As long as the dvPortGroup names stick to this format I can parse the name to come up with a description as well as the IP space of the network. The dvPortGroup also carries information about the associated VLAN, which is useful information to have. And I can easily export this information with a simple PowerCLI query:
 
 ```powershell
-PS /home/john> get-vdportgroup | select Name, VlanConfiguration
-
+get-vdportgroup | select Name, VlanConfiguration # [tl! .cmd_pwsh]
+# [tl! .nocopy:start]
 Name                          VlanConfiguration
 ----                          -----------------
 MGT-Home 192.168.1.0
@@ -50,15 +50,15 @@ DRE-Servers 172.16.50.0       VLAN 1650
 DRE-Servers 172.16.60.x       VLAN 1660
 VPOT8-Mgmt 172.20.10.0/27     VLAN 20
 VPOT8-Servers 172.20.10.32/27 VLAN 30
-VPOT8-Servers 172.20.10.64_26 VLAN 40
+VPOT8-Servers 172.20.10.64_26 VLAN 40 # [tl! .nocopy:end]
 ```
 
 In my [homelab](/vmware-home-lab-on-intel-nuc-9/), I only have a single vCenter. In production, we've got a handful of vCenters, and each manages the hosts in a given region. So I can use information about which vCenter hosts a dvPortGroup to figure out which region a network is in. When I import this data into phpIPAM, I can use the vCenter name to assign [remote scan agents](https://github.com/jbowdre/phpipam-agent-docker) to networks based on the region that they're in. I can also grab information about which virtual datacenter a dvPortGroup lives in, which I'll use for grouping networks into sites or sections.
 
 The vCenter can be found in the `Uid` property returned by `get-vdportgroup`:
 ```powershell
-PS /home/john> get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid
-
+get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid # [tl! .cmd_pwsh]
+# [tl! .nocopy:start]
 Name                      VlanConfiguration Datacenter Uid
 ----                      ----------------- ---------- ---
 MGT-Home 192.168.1.0                        Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27015/

@@ -70,13 +70,14 @@ DRE-Servers 172.16.50.0   VLAN 1650  Lab /VIServer=lab\john@vcsa.
 DRE-Servers 172.16.60.x   VLAN 1660  Lab        /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-28014/
 VPOT8-Mgmt 172.20.10.0/…  VLAN 20    Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35018/
 VPOT8-Servers 172.20.10…  VLAN 30    Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35019/
-VPOT8-Servers 172.20.10…  VLAN 40    Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35020/
+VPOT8-Servers 172.20.10…  VLAN 40    Other Lab  /VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-35020/ # [tl! .nocopy:end]
 ```
 
 It's not pretty, but it'll do the trick. All that's left is to export this data into a handy-dandy CSV-formatted file that I can easily parse for import:
 
 ```powershell
-get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid | export-csv -NoTypeInformation ./networks.csv
+get-vdportgroup | select Name, VlanConfiguration, Datacenter, Uid ` # [tl! .cmd_pwsh]
+  | export-csv -NoTypeInformation ./networks.csv
 ```
 ![My networks.csv export, including the networks which don't match the naming criteria and will be skipped by the import process.](networks.csv.png)
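The vCenter name that the import script later reports (`vcsa`) hides inside that `Uid` string: it's the short hostname between the `@` and the port. As a quick illustration of the parsing idea in shell (the `sed` expression is mine, not the script's actual Python):

```shell
# pull the vCenter short hostname out of a Uid value (sample from the output above)
uid='/VIServer=lab\john@vcsa.lab.bowdre.net:443/DistributedPortgroup=DistributedVirtualPortgroup-dvportgroup-27015/'
# strip through the '@', drop the ':443/...' tail, keep only the first DNS label
vcenter=$(printf '%s\n' "$uid" | sed 's/.*@//; s/:.*//; s/\..*//')
echo "$vcenter"   # vcsa
```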
@@ -96,7 +97,8 @@ I'm also going to head in to **Administration > IP Related Management > Sections
 ### Script time
 Well that's enough prep work; now it's time for the Python3 [script](https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py):
 
-```python {linenos=true}
+```python
+# torchlight! {"lineNumbers": true}
 # The latest version of this script can be found on Github:
 # https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py

@@ -478,8 +480,8 @@ if __name__ == "__main__":
 ```
 
 I'll run it and provide the path to the network export CSV file:
-```command
-python3 phpipam-bulk-import.py ~/networks.csv
+```shell
+python3 phpipam-bulk-import.py ~/networks.csv # [tl! .cmd]
 ```
 
 The script will print out a little descriptive bit about what sort of networks it's going to try to import and then will straight away start processing the file to identify the networks, vCenters, VLANs, and datacenters which will be imported:
@@ -489,16 +491,19 @@ Importing networks from /home/john/networks.csv...
 Processed 17 lines and found:
 
 - 10 networks:
-   ['BOW-Servers 172.16.20.0', 'BOW-Servers 172.16.30.0', 'BOW-Servers 172.16.40.0', 'DRE-Servers 172.16.50.0', 'DRE-Servers 172.16.60.x', 'MGT-Home 192.168.1.0', 'MGT-Servers 172.16.10.0', 'VPOT8-Mgmt 172.20.10.0/27', 'VPOT8-Servers 172.20.10.32/27', 'VPOT8-Servers 172.20.10.64_26']
+   ['BOW-Servers 172.16.20.0', 'BOW-Servers 172.16.30.0', 'BOW-Servers 172.16.40.0',
+   'DRE-Servers 172.16.50.0', 'DRE-Servers 172.16.60.x', 'MGT-Home 192.168.1.0',
+   'MGT-Servers 172.16.10.0', 'VPOT8-Mgmt 172.20.10.0/27', 'VPOT8-Servers 172.20.10.32/27',
+   'VPOT8-Servers 172.20.10.64_26']
 - 1 vCenter servers:
-   ['vcsa']
+   ['vcsa']
 - 10 VLANs:
-   [0, 20, 30, 40, 1610, 1620, 1630, 1640, 1650, 1660]
+   [0, 20, 30, 40, 1610, 1620, 1630, 1640, 1650, 1660]
 - 2 Datacenters:
-   ['Lab', 'Other Lab']
+   ['Lab', 'Other Lab']
 ```
 
 It then starts prompting for the additional details which will be needed:

@@ -570,8 +575,8 @@ So now phpIPAM knows about the vSphere networks I care about, and it can keep tr
 
 ... but I haven't actually *deployed* an agent yet. I'll do that by following the same basic steps [described here](/tanzu-community-edition-k8s-homelab/#phpipam-agent) to spin up my `phpipam-agent` on Kubernetes, and I'll plug in that automagically-generated code for the `IPAM_AGENT_KEY` environment variable:
 
-```yaml {linenos=true}
----
+```yaml
+# torchlight! {"lineNumbers": true}
 apiVersion: apps/v1
 kind: Deployment
 metadata:
@@ -24,35 +24,35 @@ comment: true # Disable comment if false.
 It's super handy when a Linux config file is loaded with comments to tell you precisely how to configure the thing, but all those comments can really get in the way when you're trying to review the current configuration.
 
 Next time, instead of scrolling through page after page of lengthy embedded explanations, just use:
-```command
-egrep -v "^\s*(#|$)" $filename
+```shell
+egrep -v "^\s*(#|$)" $filename # [tl! .cmd]
 ```
 
 For added usefulness, I alias this command to `ccat` (which my brain interprets as "commentless cat") in [my `~/.zshrc`](https://github.com/jbowdre/dotfiles/blob/main/zsh/.zshrc):
-```command
+```shell
 alias ccat='egrep -v "^\s*(#|$)"'
 ```
 
 Now instead of viewing all 75 lines of a [mostly-default Vagrantfile](/create-vms-chromebook-hashicorp-vagrant), I just see the 7 that matter:
-```command-session
-wc -l Vagrantfile
-75 Vagrantfile
+```shell
+wc -l Vagrantfile # [tl! .cmd]
+75 Vagrantfile # [tl! .nocopy]
 ```
 
-```command-session
-ccat Vagrantfile
-Vagrant.configure("2") do |config|
+```shell
+ccat Vagrantfile # [tl! .cmd]
+Vagrant.configure("2") do |config| # [tl! .nocopy:start]
   config.vm.box = "oopsme/windows11-22h2"
   config.vm.provider :libvirt do |libvirt|
     libvirt.cpus = 4
     libvirt.memory = 4096
   end
-end
+end # [tl! .nocopy:end]
 ```
 
-```command-session
-ccat Vagrantfile | wc -l
-7
+```shell
+ccat Vagrantfile | wc -l # [tl! .cmd]
+7 # [tl! .nocopy]
 ```
 
 Nice!
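One side note on the filter itself: `egrep` is deprecated in recent GNU grep releases in favor of `grep -E`, and the `\s` shorthand inside the pattern is a GNU extension rather than strict POSIX ERE. The same alias in the newer spelling, plus a tiny demonstration:

```shell
# the ccat filter using the non-deprecated grep -E spelling
alias ccat='grep -E -v "^\s*(#|$)"'

# demo on a three-line config: comment and blank line are dropped
printf '# a comment\n\nsetting=1\n' | grep -E -v "^\s*(#|$)"   # setting=1
```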
@@ -67,8 +67,8 @@ Anyway, after switching to the cheaper Standard tier I can click on the **Extern
 
 ##### Security Configuration
 The **Security** section lets me go ahead and upload an SSH public key that I can then use for logging into the instance once it's running. Of course, that means I'll first need to generate a key pair for this purpose:
-```command
-ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard
+```shell
+ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard # [tl! .cmd]
 ```
 
 Okay, now that I've got my keys, I can click the **Add Item** button and paste in the contents of `~/.ssh/id_ed25519_wireguard.pub`.
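Rather than remembering the `-i` flag for every connection, the key can be pinned to a host entry in `~/.ssh/config`. A sketch (the `wg-gcp` alias, the user name, and the placeholder IP are mine):

```
Host wg-gcp
    HostName {PUBLIC_IP}
    User your-username
    IdentityFile ~/.ssh/id_ed25519_wireguard
```

With that in place, `ssh wg-gcp` picks up the right key automatically.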
@ -90,63 +90,64 @@ I'll click **Create** and move on.
|
|||
|
||||
#### WireGuard Server Setup
|
||||
Once the **Compute Engine > Instances** [page](https://console.cloud.google.com/compute/instances) indicates that the instance is ready, I can make a note of the listed public IP and then log in via SSH:
|
||||
```command
|
||||
ssh -i ~/.ssh/id_25519_wireguard {PUBLIC_IP}
|
||||
```shell
|
||||
ssh -i ~/.ssh/id_25519_wireguard {PUBLIC_IP} # [tl! .cmd]
|
||||
```
|
||||
|
||||
##### Preparation
|
||||
And, as always, I'll first make sure the OS is fully updated before doing anything else:
|
||||
```command
|
||||
sudo apt update
|
||||
```shell
|
||||
sudo apt update # [tl! .cmd:1]
|
||||
sudo apt upgrade
|
||||
```
|
||||
|
||||
Then I'll install `ufw` to easily manage the host firewall, `qrencode` to make it easier to generate configs for mobile clients, `openresolv` to avoid [this issue](https://superuser.com/questions/1500691/usr-bin-wg-quick-line-31-resolvconf-command-not-found-wireguard-debian/1500896), and `wireguard` to, um, guard the wires:
|
||||
```command
|
||||
sudo apt install ufw qrencode openresolv wireguard
|
||||
```shell
|
||||
sudo apt install ufw qrencode openresolv wireguard # [tl! .cmd]
|
||||
```
|
||||
|
||||
Configuring the host firewall with `ufw` is very straight forward:
|
||||
```shell
|
||||
# First, SSH:
|
||||
sudo ufw allow 22/tcp
|
||||
# and WireGuard:
|
||||
sudo ufw allow 51820/udp
|
||||
# Then turn it on:
|
||||
sudo ufw enable
|
||||
# First, SSH: # [tl! .nocopy]
|
||||
sudo ufw allow 22/tcp # [tl! .cmd]
|
||||
# and WireGuard: # [tl! .nocopy]
|
||||
sudo ufw allow 51820/udp # [tl! .cmd]
|
||||
# Then turn it on: # [tl! .nocopy]
|
||||
sudo ufw enable # [tl! .cmd]
|
||||
```
|
||||
|
||||
The last preparatory step is to enable packet forwarding in the kernel so that the instance will be able to route traffic between the remote clients and my home network (once I get to that point). I can configure that on-the-fly with:
|
||||
```command
|
||||
sudo sysctl -w net.ipv4.ip_forward=1
|
||||
```shell
|
||||
sudo sysctl -w net.ipv4.ip_forward=1 # [tl! .cmd]
|
||||
```
|
||||
|
||||
To make it permanent, I'll edit `/etc/sysctl.conf` and uncomment the same line:
|
||||
```command
|
||||
sudo vi /etc/sysctl.conf
|
||||
```shell
|
||||
sudo vi /etc/sysctl.conf # [tl! .cmd]
|
||||
```
|
||||
```cfg
|
||||
```ini
|
||||
# Uncomment the next line to enable packet forwarding for IPv4
|
||||
net.ipv4.ip_forward=1
|
||||
```
|
||||
|
||||
##### WireGuard Interface Config
|
||||
I'll switch to the root user, move into the `/etc/wireguard` directory, and issue `umask 077` so that the files I'm about to create will have a very limited permission set (to be accessible by root, and _only_ root):
|
||||
```command
|
||||
sudo -i
|
||||
cd /etc/wireguard
|
||||
```shell
|
||||
sudo -i # [tl! .cmd]
|
||||
cd /etc/wireguard # [tl! .cmd_root:1]
|
||||
umask 077
|
||||
```
|
||||
|
||||
Then I can use the `wg genkey` command to generate the server's private key, save it to a file called `server.key`, pass it through `wg pubkey` to generate the corresponding public key, and save that to `server.pub`:
|
||||
```command
|
||||
wg genkey | tee server.key | wg pubkey > server.pub
|
||||
```shell
|
||||
wg genkey | tee server.key | wg pubkey > server.pub # [tl! .cmd_root]
|
||||
```
|
||||
|
||||
As I mentioned earlier, WireGuard will create a virtual network interface using an internal network to pass traffic between the WireGuard peers. By convention, that interface is `wg0` and it draws its configuration from a file in `/etc/wireguard` named `wg0.conf`. I could create a configuration file with a different name and thus wind up with a different interface name as well, but I'll stick with tradition to keep things easy to follow.
The format of the interface configuration file will need to look something like this:
```ini
# torchlight! {"lineNumbers": true}
[Interface] # this section defines the local WireGuard interface
Address = # CIDR-format IP address of the virtual WireGuard interface
ListenPort = # WireGuard listens on this port for incoming traffic (randomized if not specified)
AllowedIPs = # which IPs will be routed to this peer
```
There will be a single `[Interface]` section in each peer's configuration file, but they may include multiple `[Peer]` sections. For my config, I'll use the `10.200.200.0/24` network for WireGuard, and let this server be `10.200.200.1`, the VyOS router in my home lab `10.200.200.2`, and I'll assign IPs to the other peers from there. I found a note that Google Cloud uses an MTU size of `1460` bytes so that's what I'll set on this end. I'm going to configure WireGuard to use the VyOS router as the DNS server, and I'll specify my internal `lab.bowdre.net` search domain. Finally, I'll leverage the `PostUp` and `PostDown` directives to enable and disable NAT so that the server will be able to forward traffic between networks for me.
So here's the start of my GCP WireGuard server's `/etc/wireguard/wg0.conf`:
```ini
# torchlight! {"lineNumbers": true}
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens4 -j MASQUERADE
```
I don't have any other peers ready to add to this config yet, but I can go ahead and bring up the interface all the same. I'm going to use the `wg-quick` wrapper instead of calling `wg` directly since it simplifies a bit of the configuration, but first I'll need to enable the `wg-quick@{INTERFACE}` service so that it will run automatically at startup:
```shell
systemctl enable wg-quick@wg0 # [tl! .cmd_root:1]
systemctl start wg-quick@wg0
```
I can now bring up the interface with `wg-quick up wg0` and check the status with `wg show`:
```shell
wg-quick up wg0 # [tl! .cmd_root]
[#] ip link add wg0 type wireguard # [tl! .nocopy:start]
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.200.200.1/24 dev wg0
[#] ip link set mtu 1460 up dev wg0
[#] resolvconf -a wg0 -m 0 -x
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE # [tl! .nocopy:end]
```

```shell
wg show # [tl! .cmd_root]
interface: wg0 # [tl! .nocopy:3]
  public key: {GCP_PUBLIC_KEY}
  private key: (hidden)
  listening port: 51820
```

I'll come back here once I've got a peer config to add.

### Configure VyOS Router as WireGuard Peer
Comparatively, configuring WireGuard on VyOS is a bit more direct. I'll start by entering configuration mode and generating and binding a key pair for this interface:
```shell
configure # [tl! .cmd_root:1]
run generate pki wireguard key-pair install interface wg0
```
And then I'll configure the rest of the options needed for the interface:
```shell
set interfaces wireguard wg0 address '10.200.200.2/24' # [tl! .cmd_root:start]
set interfaces wireguard wg0 description 'VPN to GCP'
set interfaces wireguard wg0 peer wireguard-gcp address '{GCP_PUBLIC_IP}'
set interfaces wireguard wg0 peer wireguard-gcp allowed-ips '0.0.0.0/0'
set interfaces wireguard wg0 peer wireguard-gcp persistent-keepalive '25'
set interfaces wireguard wg0 peer wireguard-gcp port '51820'
set interfaces wireguard wg0 peer wireguard-gcp public-key '{GCP_PUBLIC_KEY}' # [tl! .cmd_root:end]
```
Note that this time I'm allowing all IPs (`0.0.0.0/0`) so that this WireGuard interface will pass traffic intended for any destination (whether it's local, remote, or on the Internet). And I'm specifying a [25-second `persistent-keepalive` interval](https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence) to help ensure that this NAT-ed tunnel stays up even when it's not actively passing traffic - after all, I'll need the GCP-hosted peer to be able to initiate the connection so I can access the home network remotely.
While I'm at it, I'll also add a static route to ensure traffic for the WireGuard tunnel finds the right interface:
```shell
set protocols static route 10.200.200.0/24 interface wg0 # [tl! .cmd_root]
```
And I'll add the new `wg0` interface as a listening address for the VyOS DNS forwarder:
```shell
set service dns forwarding listen-address '10.200.200.2' # [tl! .cmd_root]
```
I can use the `compare` command to verify the changes I've made, and then apply and save the updated config:
```shell
compare # [tl! .cmd_root:2]
commit
save
```
I can check the status of WireGuard on VyOS (and view the public key!) like so:
```shell
show interfaces wireguard wg0 summary # [tl! .cmd_root]
interface: wg0 # [tl! .nocopy:start]
  public key: {VYOS_PUBLIC_KEY}
  private key: (hidden)
  listening port: 43543

peer: {GCP_PUBLIC_KEY}
  endpoint: {GCP_PUBLIC_IP}:51820
  allowed ips: 0.0.0.0/0
  transfer: 0 B received, 592 B sent
  persistent keepalive: every 25 seconds # [tl! .nocopy:end]
```
See? That part was much easier to set up! But it doesn't look like it's actually passing traffic yet... because while the VyOS peer has been configured with the GCP peer's public key, the GCP peer doesn't know anything about the VyOS peer yet.
So I'll copy `{VYOS_PUBLIC_KEY}` and SSH back to the GCP instance to finish that configuration. Once I'm there, I can edit `/etc/wireguard/wg0.conf` as root and add in a new `[Peer]` section at the bottom, like this:
```ini
[Peer]
# VyOS
PublicKey = {VYOS_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
```
This time, I'm telling WireGuard that the new peer has IP `10.200.200.2` but that it should also get traffic destined for the `192.168.1.0/24` and `172.16.0.0/16` networks, my home and lab networks. Again, the `AllowedIPs` parameter is used for WireGuard's Cryptokey Routing so that it can keep track of which traffic goes to which peers (and which key to use for encryption).
After saving the file, I can either restart WireGuard by bringing the interface down and back up (`wg-quick down wg0 && wg-quick up wg0`), or I can reload it on the fly with:
```shell
sudo -i # [tl! .cmd]
wg syncconf wg0 <(wg-quick strip wg0) # [tl! .cmd_root]
```
(I can't just use `wg syncconf wg0` directly since `/etc/wireguard/wg0.conf` includes the `PostUp`/`PostDown` commands which can only be parsed by the `wg-quick` wrapper, so I'm using `wg-quick strip {INTERFACE}` to grab the contents of the config file, remove the problematic bits, and then pass what's left to the `wg syncconf {INTERFACE}` command to update the current running config.)
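
The `<( ... )` syntax doing the heavy lifting there is Bash process substitution: the inner command's output is exposed as a transient file under `/dev/fd/` that the outer command can read like any other file. A standalone illustration, nothing WireGuard-specific (the exact `/dev/fd` number may vary):

```shell
wc -l <(printf 'one\ntwo\n') # [tl! .cmd]
2 /dev/fd/63 # [tl! .nocopy]
```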
Now I can check the status of WireGuard on the GCP end:
```shell
wg show # [tl! .cmd_root]
interface: wg0 # [tl! .nocopy:start]
  public key: {GCP_PUBLIC_KEY}
  private key: (hidden)
  listening port: 51820

peer: {VYOS_PUBLIC_KEY}
  endpoint: {VYOS_PUBLIC_IP}:43990
  allowed ips: 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16
  latest handshake: 55 seconds ago
  transfer: 1.23 KiB received, 368 B sent # [tl! .nocopy:end]
```
Hey, we're passing traffic now! And I can verify that I can ping stuff on my home and lab networks from the GCP instance:
```shell
ping -c 1 192.168.1.5 # [tl! .cmd]
PING 192.168.1.5 (192.168.1.5) 56(84) bytes of data. # [tl! .nocopy:start]
64 bytes from 192.168.1.5: icmp_seq=1 ttl=127 time=35.6 ms

--- 192.168.1.5 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 35.598/35.598/35.598/0.000 ms # [tl! .nocopy:end]
```

```shell
ping -c 1 172.16.10.1 # [tl! .cmd]
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data. # [tl! .nocopy:start]
64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=35.3 ms

--- 172.16.10.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 35.275/35.275/35.275/0.000 ms # [tl! .nocopy:end]
```
Cool!

I _shouldn't_ need the keepalive for the "Road Warrior" peers connecting to the

Now I can go ahead and save this configuration, but before I try (and fail) to connect I first need to tell the cloud-hosted peer about the Chromebook. So I fire up an SSH session to my GCP instance, become root, and edit the WireGuard configuration to add a new `[Peer]` section.
```shell
sudo -i # [tl! .cmd]
vi /etc/wireguard/wg0.conf # [tl! .cmd_root]
```
Here's the new section that I'll add to the bottom of the config:
```ini
[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
AllowedIPs = 10.200.200.3/32
```
This one is acting as a single-node endpoint (rather than an entryway into other networks like the VyOS peer) so setting `AllowedIPs` to only the peer's IP makes sure that WireGuard will only send it traffic specifically intended for this peer.
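
For reference, the host-bit arithmetic behind that: a `/32` mask leaves zero host bits, so it matches exactly one address, while the `/24` used for the tunnel network covers 256. Quick Bash arithmetic, purely illustrative:

```shell
echo $((2 ** (32 - 32))) # [tl! .cmd]
1 # [tl! .nocopy]
echo $((2 ** (32 - 24))) # [tl! .cmd]
256 # [tl! .nocopy]
```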
So my complete `/etc/wireguard/wg0.conf` looks like this so far:
```ini
# torchlight! {"lineNumbers": true}
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.200.200.1/24
DNS = 10.200.200.2, lab.bowdre.net
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o ens4 -j MASQUERADE; ip6tables -D FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -D POSTROUTING -o ens4 -j MASQUERADE

[Peer] # [tl! focus:start]
# VyOS
PublicKey = {VYOS_PUBLIC_KEY}
AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16

[Peer]
# Chromebook
PublicKey = {CB_PUBLIC_KEY}
AllowedIPs = 10.200.200.3/32 # [tl! focus:end]
```
Now to save the file and reload the WireGuard configuration again:
```shell
wg syncconf wg0 <(wg-quick strip wg0) # [tl! .cmd_root]
```
At this point I can activate the connection in the WireGuard Android app, wait a few seconds, and check with `wg show` to confirm that the tunnel has been established successfully:
```shell
wg show # [tl! .cmd_root]
interface: wg0 # [tl! .nocopy:start]
  public key: {GCP_PUBLIC_KEY}
  private key: (hidden)
  listening port: 51820

peer: {CB_PUBLIC_KEY}
  endpoint: {CB_PUBLIC_IP}:33752
  allowed ips: 10.200.200.3/32
  latest handshake: 48 seconds ago
  transfer: 169.17 KiB received, 808.33 KiB sent # [tl! .nocopy:end]
```
And I can even access my homelab when not at home!
Being able to copy-and-paste the required public keys between the WireGuard app and the SSH session to the GCP instance made it relatively easy to set up the Chromebook, but things could be a bit trickier on a phone without that kind of access. So instead I will create the phone's configuration on the WireGuard server in the cloud, render that config file as a QR code, and simply scan that through the phone's WireGuard app to import the settings.
I'll start by SSHing to the GCP instance, elevating to root, setting the restrictive `umask` again, and creating a new folder to store client configurations.
```shell
sudo -i # [tl! .cmd]
umask 077 # [tl! .cmd_root:2]
mkdir /etc/wireguard/clients
cd /etc/wireguard/clients
```
As before, I'll use the built-in `wg` commands to generate the private and public key pair:
```shell
wg genkey | tee phone1.key | wg pubkey > phone1.pub # [tl! .cmd_root]
```
I can then use those keys to assemble the config for the phone:
```ini
# torchlight! {"lineNumbers": true}
# /etc/wireguard/clients/phone1.conf
[Interface]
PrivateKey = {PHONE1_PRIVATE_KEY}
Endpoint = {GCP_PUBLIC_IP}:51820
```
I'll also add the interface address and corresponding public key to a new `[Peer]` section of `/etc/wireguard/wg0.conf`:
```ini
[Peer]
PublicKey = {PHONE1_PUBLIC_KEY}
AllowedIPs = 10.200.200.4/32
```
And reload the WireGuard config:
```shell
wg syncconf wg0 <(wg-quick strip wg0) # [tl! .cmd_root]
```
Back in the `clients/` directory, I can use `qrencode` to render the phone configuration file (keys and all!) as a QR code:
```shell
qrencode -t ansiutf8 < phone1.conf # [tl! .cmd_root]
```
![QR code config](20211028_qrcode_config.png)

I can even access my vSphere lab environment - not that it offers a great mobile

Before moving on too much further, though, I'm going to clean up the keys and client config file that I generated on the GCP instance. It's not great hygiene to keep a private key stored on the same system it's used to access.

```shell
rm -f /etc/wireguard/clients/* # [tl! .cmd_root]
```
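
An even more cautious option (assuming GNU coreutils `shred` is available, and noting it's only effective on filesystems that overwrite data in place) is to scrub the key material before unlinking it:

```shell
shred -u /etc/wireguard/clients/* # [tl! .cmd_root]
```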
##### Bonus: Automation!

It took a bit of fumbling, but this article describes what it took to get a Vagrant

### Install the prerequisites
There are a few packages which need to be installed before we can move on to the Vagrant-specific stuff. It's quite possible that these are already on your system.... but if they *aren't* already present you'll have a bad problem[^problem].
```shell
sudo apt update && sudo apt install \ # [tl! .cmd]
  build-essential \
  gpg \
  lsb-release \
```
[^problem]: and [will not go to space today](https://xkcd.com/1133/).
I'll be configuring Vagrant to use [`libvirt`](https://libvirt.org/) to interface with the [Kernel Virtual Machine (KVM)](https://www.linux-kvm.org/page/Main_Page) virtualization solution (rather than something like VirtualBox that would bring more overhead) so I'll need to install some packages for that as well:
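
Since KVM leans on hardware virtualization, it's worth confirming the CPU actually exposes VT-x/AMD-V before going any further; a nonzero count here means it does (this sanity check is my addition):

```shell
grep -cE 'vmx|svm' /proc/cpuinfo # [tl! .cmd]
```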
```shell
sudo apt install virt-manager libvirt-dev # [tl! .cmd]
```
And to avoid having to `sudo` each time I interact with `libvirt` I'll add myself to that group:
```shell
sudo gpasswd -a $USER libvirt ; newgrp libvirt # [tl! .cmd]
```
And to avoid [this issue](https://github.com/virt-manager/virt-manager/issues/333) I'll make a tweak to the `qemu.conf` file:
```shell
echo "remember_owner = 0" | sudo tee -a /etc/libvirt/qemu.conf # [tl! .cmd:1]
sudo systemctl restart libvirtd
```
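
A side note on that pattern: `echo ... | sudo tee -a file` is used instead of `sudo echo ... >> file` because the `>>` redirection would be opened by the unprivileged shell, not by `sudo`. The mechanics of `tee -a` itself, demonstrated with a throwaway file:

```shell
echo "remember_owner = 0" | tee -a demo.conf # [tl! .cmd]
remember_owner = 0 # [tl! .nocopy]
tail -n 1 demo.conf # [tl! .cmd]
remember_owner = 0 # [tl! .nocopy]
```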
I'm also going to use `rsync` to share a [synced folder](https://developer.hashicorp.com/vagrant/docs/synced-folders/basic_usage) between the host and the VM guest so I'll need to make sure that's installed too:
```shell
sudo apt install rsync # [tl! .cmd]
```
### Install Vagrant
With that out of the way, I'm ready to move on to the business of installing Vagrant. I'll start by adding the HashiCorp repository:
```shell
wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg # [tl! .cmd:1]
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
```
I'll then install the Vagrant package:
```shell
sudo apt update # [tl! .cmd:1]
sudo apt install vagrant
```
I also need to install the [`vagrant-libvirt` plugin](https://github.com/vagrant-libvirt/vagrant-libvirt) so that Vagrant will know how to interact with `libvirt`:
```shell
vagrant plugin install vagrant-libvirt # [tl! .cmd]
```
### Create a lightweight VM

Now I can get to the business of creating my first VM with Vagrant!

Vagrant VMs are distributed as Boxes, and I can browse some published Boxes at [app.vagrantup.com/boxes/search?provider=libvirt](https://app.vagrantup.com/boxes/search?provider=libvirt) (applying the `provider=libvirt` filter so that I only see Boxes which will run on my chosen virtualization provider). For my first VM, I'll go with something light and simple: [`generic/alpine38`](https://app.vagrantup.com/generic/boxes/alpine38).
So I'll create a new folder to contain the Vagrant configuration:
```shell
mkdir vagrant-alpine # [tl! .cmd:1]
cd vagrant-alpine
```
And since I'm referencing a Vagrant Box which is published on Vagrant Cloud, downloading the config is as simple as:
```shell
vagrant init generic/alpine38 # [tl! .cmd]
# [tl! .nocopy:4]
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
```
Before I `vagrant up` the joint, I do need to make a quick tweak to the default Vagrantfile, which is what tells Vagrant how to configure the VM. By default, Vagrant will try to create a synced folder using NFS and will throw a nasty error when that (inevitably[^inevitable]) fails. So I'll open up the Vagrantfile to review and edit it:
```shell
vim Vagrantfile # [tl! .cmd]
```
Most of the default Vagrantfile is commented out. Here's the entirety of the configuration *without* the comments:
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine38"
end
```
There's not a lot there, is there? Well I'm just going to add these two lines somewhere between the `Vagrant.configure()` and `end` lines:
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/alpine38"
  config.nfs.verify_installed = false # [tl! focus:1 highlight:1]
  config.vm.synced_folder '.', '/vagrant', type: 'rsync'
end
```
The first line tells Vagrant not to bother checking whether NFS is installed, and the second tells it to use `rsync` to share the local directory with the VM guest, where it will be mounted at `/vagrant`.
With that, I'm ready to fire up this VM with `vagrant up`! Vagrant will look inside `Vagrantfile` to see the config, pull down the `generic/alpine38` Box from Vagrant Cloud, boot the VM, configure it so I can SSH in to it, and mount the synced folder:
```shell
vagrant up # [tl! .cmd]
Bringing machine 'default' up with 'libvirt' provider... # [tl! .nocopy:start]
==> default: Box 'generic/alpine38' could not be found. Attempting to find and install...
    default: Box Provider: libvirt
    default: Box Version: >= 0
[...]
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Rsyncing folder: /home/john/projects/vagrant-alpine/ => /vagrant # [tl! .nocopy:end]
```
And then I can use `vagrant ssh` to log in to the new VM:
```shell
vagrant ssh # [tl! .cmd:1]
cat /etc/os-release
NAME="Alpine Linux" # [tl! .nocopy:5]
ID=alpine
VERSION_ID=3.8.5
PRETTY_NAME="Alpine Linux v3.8"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
```
I can also verify that the synced folder came through as expected:
```shell
ls -l /vagrant # [tl! .cmd]
total 4 # [tl! .nocopy:1]
-rw-r--r-- 1 vagrant vagrant 3117 Feb 20 15:51 Vagrantfile
```
Once I'm finished poking at this VM, shutting it down is as easy as:
```shell
vagrant halt # [tl! .cmd]
```
And if I want to clean up and remove all traces of the VM, that's just:
```shell
vagrant destroy # [tl! .cmd]
```
[^inevitable]: NFS doesn't work properly from within an LXD container, like the ChromeOS Linux development environment.

Windows 11 makes for a pretty hefty VM which will require significant storage space.
{{% /notice %}}
Again, I'll create a new folder to hold the Vagrant configuration and do a `vagrant init`:
```shell
mkdir vagrant-win11 # [tl! .cmd:2]
cd vagrant-win11
vagrant init oopsme/windows11-22h2
```
And, again, I'll edit the Vagrantfile before starting the VM. This time, though,

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "oopsme/windows11-22h2"
  config.vm.provider :libvirt do |libvirt|
    libvirt.cpus = 4 # [tl! highlight:1]
    libvirt.memory = 4096
  end
end
```
[^ram]: Note here that `libvirt.memory` is specified in MB. Windows 11 boots happily with 4096 MB of RAM.... and somewhat less so with just 4 MB. *Ask me how I know...*
Now it's time to bring it up. This one's going to take A While as it syncs the ~12GB Box first.
```shell
vagrant up # [tl! .cmd]
```
Eventually it should spit out that lovely **Machine booted and ready!** message and I can log in! I *can* do a `vagrant ssh` again to gain a shell in the Windows environment, but I'll probably want to interact with those sweet sweet graphics. That takes a little bit more effort.
First, I'll use `virsh -c qemu:///system list` to see the running VM(s):
```shell
virsh -c qemu:///system list # [tl! .cmd]
 Id   Name                    State # [tl! .nocopy:2]
---------------------------------------
 10   vagrant-win11_default   running
```
Then I can tell `virt-viewer` that I'd like to attach a session there:
```shell
virt-viewer -c qemu:///system -a vagrant-win11_default # [tl! .cmd]
```
I log in with the default password `vagrant`, and I'm in Windows 11 land!