From 0e12308a6606ae6d6e8359571c8d428d92399999 Mon Sep 17 00:00:00 2001 From: John Bowdre Date: Sun, 15 Oct 2023 14:07:45 -0500 Subject: [PATCH 01/53] new post: systemctl-edit-delay-service-startup --- .../index.md | 27 ++++++++++++++++++ .../systemctl-edit.png | Bin 0 -> 60008 bytes 2 files changed, 27 insertions(+) create mode 100644 content/posts/systemctl-edit-delay-service-startup/index.md create mode 100644 content/posts/systemctl-edit-delay-service-startup/systemctl-edit.png diff --git a/content/posts/systemctl-edit-delay-service-startup/index.md b/content/posts/systemctl-edit-delay-service-startup/index.md new file mode 100644 index 0000000..e89b170 --- /dev/null +++ b/content/posts/systemctl-edit-delay-service-startup/index.md @@ -0,0 +1,27 @@ +--- +title: "Using `systemctl edit` to Delay Service Startup" +date: 2023-10-15 +# lastmod: 2023-10-15 +description: "Quick notes on using `systemctl edit` to override a systemd service to delay its startup." +featured: false +toc: false +comment: true +series: Tips # Projects, Scripts +tags: + - crostini + - linux + - tailscale +--- +Following a recent update, I found that the [Linux development environment](https://chromium.googlesource.com/chromiumos/docs/+/HEAD/containers_and_vms.md) on my Framework Chromebook would fail to load if the [Tailscale](/secure-networking-made-simple-with-tailscale) daemon was already running. It seems that the Tailscale virtual interface may have interfered with how the CrOS Terminal app was expecting to connect to the Linux container. I initially worked around the problem by just disabling the `tailscaled` service, but having to remember to start it up manually was a pretty heavy cognitive load. + +Fortunately, it turns out that overriding the service to insert a short startup delay is really easy. I'll just use the `systemctl edit` command to create a quick override configuration: +```shell +sudo systemctl edit tailscaled +``` + +This shows me the existing contents of the `tailscaled.service` definition so I can easily insert some overrides above. In this case, I just want to use `sleep 5` as the `ExecStartPre` command so that the service start will be delayed by 5 seconds: +![systemctl edit](systemctl-edit.png) + +Upon saving the file, it gets installed to `/etc/systemd/system/tailscaled.service.d/override.conf`. Now the Tailscale interface won't automatically come up until a few seconds later, and that's enough to let my Terminal app start up reliably once more. + +Easy peasy. 
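The drop-in itself is tiny. A rough sketch of what winds up in `/etc/systemd/system/tailscaled.service.d/override.conf` (the absolute path to `sleep` is an assumption here; the screenshot above shows the authoritative contents):

```ini
# Sketch of the override drop-in; the exact path to sleep is assumed
[Service]
ExecStartPre=/bin/sleep 5
```

Running `systemctl cat tailscaled` afterwards shows the stock unit with the drop-in layered on top, which is a quick way to confirm the override took effect.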
diff --git a/content/posts/systemctl-edit-delay-service-startup/systemctl-edit.png b/content/posts/systemctl-edit-delay-service-startup/systemctl-edit.png new file mode 100644 index 0000000000000000000000000000000000000000..d1e22f6c023b6b33ca3bd65b71466aebe4b5ae9f GIT binary patch literal 60008 zcmeFZWmwc*^e(JYA|;KqGK4fr3J5$1G9cX{ARyf!9Rf<{NVhafcZalqq{J|!gv8LD z@8(IL>-|6HI_Jarb}r|Ga1FoNv-a9+-RoZW+Cd6(&vEXM-??$)1`br}nbM6LsOUFt z+?>6A3;gB5jP|+Kx|GZkVF+~Y#9FN3F{QmV2$W6%2e?OG_rqr#tO)pb}Umtn*E#xia z-=7l4a05CwJdG!deT(+SKd(sCNYr0{rkxu`rNz{#JNp7I*_yC$U8* z7WF3n>WrA3{p5;OXskM1^+ydxeWKqE_Wg9;3}Y!52U_fo#}=i}(}$9wB3>VRM$%S^ zIygT+k2Wxo^&XETG2wx-b0hOsZUhV-`)+3uT$QM>laK+U@yETTg0+}_!w%!1?SF2f z;(GjE<~KWp=z>DSy*OpYE3(Y8c3znM*3Zf{#;XVUp)U$cZjYU=%tVh`%sL|c4y$Tn z@0YqN_OjF(E1Po?8pIEy<~wxYOB~hR4#<~LM)n2k4RN|GS4ot>RvqE9M_bC}Q% z0~4z@7=|LXOk}^rDC55RkX$FW9z8cHxA9y5a*Op(pRNAlDPex6kkr*$9zkjssoTH| z!C__N?G3ewd|55^sF1vb=YWhlOP##jm&bgv$ft}{euoUBtSRyDwFnp7z^ZX{-uY=I zh999F_3dtlmv=rLQ&X!c?=I|+eCBuN`y-y-Ym?-haQl$noF}L zzLRG&Gq+op!qz%3w@Qvdt0%5gw|cg^k4`$@A~o{RvsJh}@u=xapWa8##?}Bk6kp=` zbSRsxpv4n9lSm_M_-g22R8>W$v8^rStr?suU8>MxxRf7A)w7nbn>7pCl=HiYirJ_SB%KgmPoY*F?xFbB7 zH+y%Y#K&~oQo)dt4yE=`z#b zz?bOXyeL)K45@APG5$0$t#$gK-d=kpN@hRVoTAB8PFr@`et^}y=h#0rEYU?j!rUoR z%V2O7C-i>q2t`E%lbyptSuFQGuElX=PNTw@*Qn{Y4e}G%DEN736-T(Mi6re)*6NoW zzkS5$gr?=Fj>1ATaW5RojH|V~q}GKH#hi4ixUs zix1Ko9;RBICBS}L*v)wSsfx1sejIA)WF=-Tj=*}gHtPCXYUw)!`tA-k_sI4Hbs08aZN>WlN8_fMB+JxD4Wh6kdfBWFiF_dj1M&j$Hr zS=nJ}u6qeP9)_PYY}Msn4SAbin_IuVY;Z!zz7$u6i2=7VD?tlxhF`#j!44s)@|Iti(-e6!#(?dX!;W;Si6~ub9in zl#`#$VQ=9~!n=Vb!+wAR4U}TDfqPopZp0q1&>bewhl?R8+z#J@k3cB)EM*gQluCGTny>XH+pVA72ap%5A z5yWCQF9P9DKNGkIRaSCgga34r&b4p6>d9V|M<%kFfM8W`Pca!fkRONP2fV}QA$?06 zYSu|53P)Te4gKp7{p<+pj8*|_(U(0~&j>20&+|@ungc~Z4sLgOJr!uA>S=FRFHyq$ zt*So4i#`su9Z%i;?qdnOMR;|c^9#zWmesA#=z8~g&~U2swAy8^#D8Fo`Vt8$vR31H zvktJ|uLj0emgWt?TP3g6x#|YD zUfJ*MHg}Q}Gct>(N;9qw!=SURV~K7ZK;EC4b7**bp}sX+do>N(k6!uBAajg5tA@>Pk{uT8 z7kkcdFV)d|vMxzo>MB17r}>h_EU;J1oy0uPOi~AhXH*9l?w8ftjkPhe2wS|=%wjVx znl{HT*tdC7`1~fSs6PC*X$&e=fC!y8@ozu&*nCx~(|(}ar&ajtr1LdzL#Ics?s|tt z-D}TTnhOB&PRTuGW^SJJBkz>jnM{tSapq4SowIHOW^(A+NBN|~^CU~&Ezjch4i-n! 
zhmOrI5>LzHCc62c9PP&6rJM>WTxzdIonc>jw7*Gvy?gy)?l(iL2+wZK=9df4n(|57 z8dY!dzGq0?m%-H&+_nc^sFk37h5&m?>W(^`III?q;bdYbFu2_<%(?g+8$hWL{je7= zv#%pvUTwe6;cYruCp`tcin(RtM~;Vby3_1h;#k2ZB-6bb{O#17@aH^MA^2GtkU3ld zBwcVPEl0`7Vy7q-%q*{(%YSU}w{@SZKySjD>WxMBI&~d|4R8REV!=}cag`rbC@N&# zFB?zz(tkG_KPh05+U1!Ri!vZPgBP%BjZbXfJ!K7hm#v=%rzW$J zHRP1_+5A&#=Q5JvOT5qeh=e08(v12&r}jvb0Mby794B{fV#BomQREOTCfx@~T1v^h zMg}S~Nt}(0no&@`4rk2A!=e2}l?MmGG+XJBd}0=(2DIobQMdp6ntzrZM={t%z1 zT3}G_Rc;TN;v)5C@a6{)%2!Q60z9vq9u;6?O4W4a5GRPFaT@hcj#WfbB<68RNkD<8 z!=Fd<>$xV_8QU4A&yYJc40-P|OcRlI_l16XpBh&Ln;mKA%_^AC)ER~QMORCiS6o|O znB!&PB)carPpw`!!?)#hVJviW81A^JEP<_nDV`DzPc>gHq}H3nY$Otu2<;B-cKU5w zh}h(E=0fuVtZXQG_1d?s}?2UB-fMm*gL z)VS0k&>{7{ki_QQ^mxuCRi=)-y8OTwlIZ31<58*Q-SR#35hGjHnt;&=%kKpFUA&aeheMlbC50 z0tg9r55#BecQZEW=l1->@>Z6<>PfoJDvNw*lB|2Yl2e<lF6cvIJ0n-6rpG4Fa9)&p6WQ*uST6(6R41ZZu z`xk58bVa`kUNf2U=csc*IK8LDAXe{+E@0Xso60p$+jR|E8}7bWku#oG zF!D$4tYx^rA~sY$*Y0jSw1RuR*y~{t`r2K<**YWGiJU`c?9P zyX}MJ>};>eHUVQO86*~@ob|MmT!|7lbVReFHo(O(2a!{m4i-+`rGY7VuKGr;OD!>| zQt-&E7NjgYbTdr?{s3}Q!spg)A*Nd%PynZy>*NYng~flP6mk$` ztyLi?x~`gFvPBRkb8?XCBWr}*^zC2uLLffQ3IG(wR)s5w1fz9eaUa3eD|TE#c#~qw zed=eZmWKwlT72)na#?LQP`~SzG83sWXq2=Od%(1r9kaoGCO;G1?CeV|bLZM@Pa1u6 z)_dq=+ugzMb#%D~?YG;$gkiKF?3QIOj-&p7p!pW^vS*Jvz;3FjOg%q_4~} zl&1Gth%}5aulH4nMn&AuwR?s?w_NdEMrdnRi%x`2Dr~V~u2b})WF>#GetvEO0Z=Ou z_eS<}=$+s=v6Pt>$wFbNl$rEyv1{wmBiiE(oM%gXN@+f&otN`JN5?>FmM987z^tLH zz{vMcocaSC5b@i+p9*<+mq=9q6w?%jnsk`LXtVV|S>t4niHc9(L=GYbaA;2vg5feg zu8r`%7| z5cv148-Hw@T;w*%uv_B=O5ug|6ZT?PVQf|ov~d^!px>A|4#Uz8Tkd;@@c%{^hc4&M zfvZFrWH8AGY2{5}Ln>rG%;2#HSn$)_roHZL5bVC9=uRFkER$Y- zJ`M?x#0zDepsaz3_69B-zVBo;euTj`E$gOM$k82FNzWDl7B?49eDbD#3eWQi=b@j= z{N1y@2cLVt5Xcymn?J$zp5d_2yn~K`q3CY@6N}fJjdDV4 z*yy_g04vruWxppvGTOfq;UEMPy~{d^LuFXHo`^<%wdSUv_XA9b^CM9GxUMh>bMhNY z0Jfq%=d6Qe=$`6&+Bsp0)d#E(2tSDu=~!+uy2qf(mS{iMB+v#O31BSKlK|@8 zuvef?#!bFVMOQf57x`KKF^|xV^k(CQR)b6?h7$?)^;sFAn4pu+<*T`kVV#)ugUnO0 zY!z*nfj>mQaQd22<-UfK8fnu0xRtR(-Z3|KT)VnV zY~bELIKf761&Y^25kq*aT)yp`6CWwhxG0u6J4TnHgZl?rrQS4&TZAxC`=VEbs5xnG z8K9gOuVm{a@S1IH4EM9}axp;2K0)oOcYQxd&PamrY-{ja2y6qTsv1t?+Ow6>7@Q9I z!a-@MPf89TL;zBaMZ+ejHrmtGV-i}5B>zBh99szg%>sf}gkdny@7pUjT;wvcgWu6_ zpk{mb@vG>2p7VyGDf=1*6-+Z75}sQ-?FEX+Xj-dRd1-E1lF^Hu6n5bdG&j-$BhqQ3 zF46v+ik8N=wtk~6twWz1e+Yl;H;}%6#ZApmStC}a=pauCzz}~=K04YJpzSPgImi-} zl&AQ(Ul-_R0zxP#N&b=cVIc+C zF&6MPKBVAm$&W%%k&(1&O*ksJ+w|k~?tU9OHL^iBz9^GXE1rc?04j<9rnw^XWGBUta>Sm;l#{5{MG~ zO3uE0D?|7eV!OJT#=9Ei{|^y+1J#%nFsxFy{zb3ant@el)qCeh^q77d%|kYXC6D)7 z>>Cy*ycC14dyq`lKMHso=(v^6Z}KLV<%1)Mc$gRE^Tr9fK|l%8G!Ghu$%-&N?pTNL zVHMrvqXM3q6i#(;5;T<>nG%0hcrzDOijPq%^ncwPeH^~S_TB&wtXgXf7g zV0C~ewpU)K%U#*xmb_?Ke=8TuVtq~>L%n^HM9FEAQVotojv5FXfW5J2OR*O9jdl(j zp-7y^t#ls4mNz1pKM2?T?6u4BfnQfG3=wO05Aiz9%97?oJB zF{%7>mArqhlE2N%Om_khyC%BIz)SJXS#n6eURI;+~Wj zZ^8(#M)U3B*NBoA^xRDZvTN@x8QH@`3+rDdEH2imhC#-Y2<3J0ngaoh?6-xsG|JhR z&ynYGIX^J3WH44~gvYl}BKNJ8#F6d(IUg~tEAHF50w*oM;1N+CCXmn6(^ylYAv|I# z_0b|35n!m*Lle0QV#=Z)EKa<$I7LP8=S&bLHI5SL9{34Vd*$dUuwMqXT-Nw<G6lc(jj7dj_dp-;mpq$t1g zg(+&1v?eMctN|bRTG$>%seH6+(L`Els<#Ma_v+hYeU;iv#ztANrCOOraZho=C0@A2 zgqRyeCSx^jJnHHuMa~EXhy<3npd@?RsnHRCkCF)ao%x<@q<9UwrGhyU{#_G6X3FA~ z1Ho&)T?fU23lEC7qk)JDk{w_1UAye8zU8@l`Z+Riw+K(etuOZ_o#pYNP$$QcX}-SM z=!IqC7a(4GD}PnmQh?G=XPGP!42TxDFlotS>7W3lgp`%kkXpVtWkjuYh2kyPjP)DxN= z#mSf{w{TA+F`4QWBFV^ZYA#_4z@O0^i^blrKC>gTO5KH?-yUf6u)YEnDZpQ=U-*Bg zT04!h{M(mQMNUl0ob@*Xt_wuQdrSMGCtvPcM*<3k=}A5EB&5A3_6HxqZqg@z-S1D^ z=m2hsuH3kK%fCw1n5u)2^MkzqgLMtOF2Dft8hzXU>h)BJcKB?1=fan@nNomFD*T5~ zOJ9+h-jtwFVzm_vp%C|)wuZ;NMm`7N3DB=lS+8koI1?R~c^lI;`zvH0#5>CCF_c`D 
z2)CI3lRXX$_?tX7qv71x%n=Fa@)r&o6T<~-1uQQEsDP0HPw{8_x9;f|dF>8}n`}kC zxX$EPJOBXx7ncil(Lu2$K{U<5vZg<&*`6;FyQ`gvchKfvY>@Cl*e!Oz2@j_F^jgfS z27OdX$ChO6AyoxL|M@r8&3MQ2Cl7NzX)LGmZ*W(JGIcmD#e_SO@BzBbDREJ{H?_^w z%pfS&&MHkb?ijEVFAZ028FwCtMzg%d|6PhU%oX$$vdn*~f(oSE>kM$X3pT1foaQu_HR*QET+F?u-*l#q zv_te|*;Z-4{lR+jo}4E0j`Moh{rKkGZEWde^_#W$w^JlZYh9fQ>-}mU4{#FO-*T6l zU7hTP52gRy0F?!BN&JLoRHwf%z8{$|Egh&1Xs118uT%5Ya(UiTj8{(9yYrmQOud7& zQ@Jzj2-HbIx5kpybgoBqLit&*x+q9mnN#=2g)bs5l?j0d-=d3s-w7xB1{VGbvqOuY zm1V2^wwiS6m)dHDd12X=h_xH>3ZU68n}npxuB@U#Yl>R^Z0ywEU{>KbCG zW+ukr8>J0pF06X&_Rfk8qETgG(^^AVPNv#iIJB$ygSqU8+onWlN=_%fM3OQK|AZ!j zwI*-UY?wojYy^c(7Es;P{1={Dnh`_E_lQzHL{@Vd_D7Nvx(c720@wNilj&(N?T;-7 z@%d$Km}Te6Xm8`gG28NZf3AFbtgd3k@GDIfRQ~TV9#GK-8i3`~JX2Jvx~+El^FBFX z9!Fyg?$ZkOKhXGahZi*0l;rR|Ph|av)@HbNN>Or~i^)CSkJg;|0|S56yyE7el~a-t zGz*in;Y@ryDTClWKP?`e>1T2Ixk1&js(v(7JPNr<*X@fwCC`=9{s8{W)WZjm$Gl=z z?PwCXAVvWq?Rg)8J%wdRF(8E_*8U=e%_$!!^;$L3Bt3rz_+2&JOCL&pTQ^@JhqqIx zGN}h-rNlG~_QW05R5~ zn?i$JurQ19t2m#D_}CO*w@(;s&M$@?C*EVYJF#St==DvS!38r$6@K4gse0_u4er&L zST5l?-kl#*&(1^?{8|c4^k~`xOnxVfL1n!ljeL#H=|=5(8=Wkol!R57!TDJD_@HiH2NUDCa4hSlMo7&11QAjngWjWb;5(8+)J|7y${0r z;&}N{`Vi7#7u@@?v?@S-v0#?Hh$s%w-|l-h zPYK6={>=|FZ&6t*pDl(n>~_R^YkZO@k<2N20DJ2}m2M0mfVEzv6d8nmrqE2O=7HRcTvM@(xO6Ew#J{ zj8C+sn~w)PK!lpIUV?Goh%AVdW_{nkYJ~$Z|7oXqGOVSy)!e>T;^^s&UkTR55!KMe zOrsv;@Q2oj9fs?E5Yc-B@WG+ZoiU>X~ER95B)g_~}jR{6_Q zDLs25fiCcgz!BC>y%^+zjsZkx`mS)(VSdWpEC&H;6QEl~)vgP7rPN zlsUCP>Z_zf&JT1g0y->a07S)OiJZechQR3n3AoROA>hNSXEb%cG1J3wm{E)MATO+-(y8eNdzrufQ`6Q1ky|)`2}Z(0Q!=ZqUKf^teFL9P9^j86$WVh zJj;sh6{(OoRL7TfpLsdO*uFFxwPgypD42G+;p8Vaa9r-2*P`G1?)*~S_uoczc?@L1A#Vv}B>jxIn@sY8LrC%FVZl0hBg}VqxQcu9pY>fX@{Nw-Z8RtElt6 zH~Rta3&-=!D8;nK)=}V=tGqRi0@!a&Spw4|(U;IW0KtQulbYFna24~G>k}Y6ICQa| zpUWH1N20XN*x}AYtJ1c}yz4-#Y1(xW_D*+cj;0+eVB)dYl;Mr|M4u!y=VwvdWQ^eu zualf|Io$J;L=%uVl+k%Z7X;}^rty9}=j z$$!~iW_X7IQp<68yTE%njihz$-u4eVB?$Vi?B;@?1fw_S4=mXeUDHoymp_|3!|Q(zCf>bZDYln62L~{D_8>ooE_YX>N??YNOjS z4hJlJ7Vz^mbjoSFoRzdBXFunoV7Kf4oba3$m8tSMHryN8DSe**F zmtK=s3o(mC`2H1@5Uj}-=}-boxN~YlV@RLEYTHdSrf>=c^<7_+bY$!p0-geU2?*W& zShY>gjIJU}cRNs+RI+(otz!CL_lhYm3qWhQrvnLq1|Xsxo3gWm-UjY(XTfTc>p(>Y zl5l8`1`TA1XhSw*ru{L!rsT`3Nh#nm+hl@}!k{nupfmNHNQN4LNpm0NMa2B;a_9yPSaBylR+f3<24pSay5mpvP4&aE-7y3vtdRK_66E1t&DM%4m_FJ}w{TC?^6_I=Q*pn0hn; z^q;0Y>$<1}bbZ7=d8%c_75hwVAI!`PG!#&>-=nylaz-Du&rROwgWKw{+XOcDx0EoOv4PZ-6>Lv)3}|lyQKXc-HBKbCfUFqfFPWv;88> z(|zcs6g~RBsGSch?z~$G+d{-wc*T~GKx~6>y6cuZ+{6Vqdz~cvSwJ=~@`TTB7)W~F zHH5(^_tu;aD9;ilYs{`(PbGQH^E+GlxQ9Sq6Z{cq2NKcZri0rbyUaOwOBxZ|P%wl- zhYll>cy?#^RogaggtV^mHQTSm_AA?uJR$%A0JLMcM>k1}pP8r>dhg1^F|cu!gUvSgPQP*NCrzP*{dkA`Pj~R8k=s+$44P_IM+%45X z@&aYeeu|EzVeZzOBb+_lU^;7%{iSsWjcr&qn;1?qA9u(JoqsW7S#YzP6vMrS0t@;% zc^fX+xNh5Ao$Nd3lWn%h{!x;*T=1F9#Y5%k>0J_f9%`?7@L+86@N!rM)2w3&UywKk z^vTj}&ER-ccHR;}xf{?29&Hf6r4vdRa+A-m9`)J`w$=*bpxICu?G-FJfxB z;E`y*h@o4sA#=${pZKwxe0^!0)eXM7a4O-+KvBGR9~)+r<>yc z@^1Yc{Qp!{i;NI4FWme^)gA8z8U+8Ltj^W*%aHh`o#2xo`QNRJ7s`2o7{9LK6bQrr zt99Xp6HM}}YTKCnKUx>;(-)6rKmOfRxmJq%tTNt!f`9)=!+{amWkItjyWabhhe-?d z*RN*=MB=oVznW6?FUk4<_xs>~`Op6|^52&GKQ;1lIwa6GB*82iAMY1Ozj3=y;}+;t zXdGop8&aJZxvumoGar8b$$5bHOh}aiLAjm^;fwcABxwHrgWmb7?%X>!qmXs$JU8<& zQr^?HSB3j#+P_aoKPqT0q2SAh#UUEi99Oi4Qa(IvCdDAH{jtH_xH!j z8?pFV6WB1(7bvfPJ>#0PEdsE)t$SB>TE}gPEjZ?^mlYdnz*b~|F3gh-nz7E8TS`~< zna5YV9@2r00~{#I!jEMx)S8p_c4i-6zIoM=EsmL)?e&>L96r)m_QtKPX<}A~qYC|! 
zS!LrYvw`C(b%#TXa6L78DW#b?ea*hLIvAEtUt!U(xG6YE!;4IF1-X#meV7sq+Jk;5-dC4fcWHxyNoC2Yq z(ML;qv5gOlK+);_^77gx#CN(N2l=T9u(KEkk2~Ov)^}E;HGmFKey>R_8a3{YwMV?W zCK!3Bz`%s2ciuht)%nV!-a0^T_B3#ukEbnrdTu2hUlE-YX7I~ppI?5?*ji{bgdlU`m8*KS-tIsFnHt z#e1(=vEE&5N8R^LCmcB5_fB+)6;M*UM@FKWV=LnQ5-K?_PuwFajP@$Nm$! zfknDk1ilK)8`6DQcU}mL5vv002cFIsd}Tws`!HojVNRP@*S@;lrZOWt12S>9!meS_ zT0%5t;r@Sq%8MWf?YhUe6%Wb4WT>kukcQ!th+pF^^Hm4v;CYasNX&z=XYBrm5g(H8*S6 z;y^JAYq(p+|7C-Oqd4tbB5)F8KDFfQD3(*+JV^5EYNH827R_Nw-}b$=WcaVAm^S^tF-&u!(lg;- zzQNn|{~k})?-DMah8c}Q49cn=$;aht?@oQ*y_5V0s_7G;9>erM7LPah zQ=Y8MZ=4bKL%A)t?qdnZebYcW!RY<751W^sTcL zd>bV~P25cCu)e}*KJ~h>E_CrgLJ+Y+#=PuW zb56~L-1!KTUa|hfBL-=ugT4wO;m}>IGWuA&Lf6~BM{e(>cy|kNqrzRAf>5}$ zl959w^mXYrw<5Cm^64(9>P)dm?X8e!t=;d z(;yWYCO6%m$R^v>0g@}uL|T~Wtu@k)AiVD;kG+_H-uuD4uM)O~(uw^+A|4-K5DN&c zkH4t9PNC?$`Tp z_>O7$xf0#aqJYPlmV0NPma>+kMsEYngVk?}1VS)lhx) zR1d6&eQ)1gRv2$u@K_Vm0TXGte8+5WL@JBUr_!v+Y6M0Py5Q0~J{5hDr~}2Ubx$Xt z2FEtey>%00ZbT0`vB75*AKJ)m;~hI&pzLg`>bYxc1-GNPgJTnc$}vUl{>uq4Cb0E| zl*q>QDleFtlGpJ7SEmaV$)dN((vLsmb+`<@fFk(|0c#h+_T=bQ)Uf$`Rs$l!A6 z&pB5d0W7uLrzFJt?}k=NOWx`g1ZLy*gr0-H>G~2)^K?(crR4{{PXr`jGP*4|zKiZw zkIBcgi>+fCD2_KocwvV&?Y-`11jZRVQ+44pt)C}9;PD_ZGz^Zf@958-SPWa0U%gqy zGm#LX-Z(Q=86@#{a4=ZrS&Y=ftdX4qMANuwD*W1@6G1#pyp}!XYz1XsNeLrrd~DWq{{R~uS5uX zB|}a6?k*MuMbcv0+8R032^@33ehR(G5;Z0HoG+OcU)v8r2y1T4mjg}EI5(~D5TEw8 zu-0$0YK5`xzq@NgdH&_}m|JaQgQOzCD5QPrnH2njhR@NhE%#-TuEp`riIfh?msdvv z-iTv+nX_6FSw$fI;`jF68F&azi*Awb-VK^5=H~&|(|Uit2Bk#HAsTVOEurU{KhMeq zANDvN&KLX)SSzE}G$!G@3OTQFt<;1a>^FCZdFHW)#0%rk34K~f#P@rG{y^iYVAiu81vd0xr&H=5rrY?dset ziN@Ocbk;aOPay;xU+iMmMBr+R!88~*tQ2(>qESsgslBB58A+C=W1$`NjJX#5o!_xK z+V0f*2N+Os$8w=3pZFg=ddSJiA!MokY7-b&zMl7l?adeA$qG!Oe?W#l`E}-i)f*Gj zx$_Icc{MrW*t=7knPb>T;)uGf+9Td$On7PBEj&}-muH~~!A!08IuSpZj0l)%_5thS zTwbqg)ZN5eY40np^A_}x4M5b(zjdo(o7PO=E?zcS-|BD&0<@QVV;t3N!dfY(Aq&r@ zMgpQY4b?1su|DP3F3q*BETx&GdCE~{$=Q%;s_u5g@(B;aZ!JvRKt+gc!N>dNN0tm11Eub}LsUITA-^(ZOl6L+1A2VbhX^0IlF zts^RG7tih+4=p9^*4C_N(FnyT|KQt6wC3zhLTz`sa}Pz5v%4Gr070Jw(H1k<-28rb zz*@Q0hqJX%pchy`ctz2O6vtLRTg1$afVW+CctKuICH0Qwgs_mFy1~yOQZ{%c)AJ>L zBr^}H(}dks4%BaDpFA-O96}Xm3-mkR3AmjNPCUeGls8EU29T4~a(#RJi}x~pQ45AW z7p)8AO2+5XV5B9KY!Rn%OyAu_F@F4Fsy{z019EKj-5fOf+yX&Ug8VM-Yef<5c_;If z-CNA1CXxwKzJ`&g89Zj#IumGrRc)LOxA~tF2vKjCBl_$l8<`n83*?5&h=h0t32TB`sSN#jkUVsu=vDC=!gX;Y0!&e3t z1Iw>Jq4-;fLRz~&DcNzFQ?~Pl+S_zKI*}c~cHg-5SIoKLbMH4W1%E8@>Pu$Os|CE5 zT{hZnI31f%tyee`Rfe_-N?$kv&(TSrMB=J8re6$Jmw9IP-eLE6rY7_TBjaJuet&9#x&DTyyh!t!QVoST;BJ zrj+X}c(0@QxjscC>(ec>dwpM}aItmF!XcDuLHvO~qT(>hAAM6PG?$Ud4-B%Fk|ukp z?BtLq<0bz7Ty*poi1&gHfKn-ItSZ^!a&M0pD-kyL?>629Qi8tXl4%iIuhO?xB)TbL z7k$N2F$no{Q_H~PnOTq<)3&P{@3tIj8F&|;EIq8kIOblAj6ep7=bQ})WNUzf5^OU} zEF!ZXwX_TpO7}cgFd?a14FGyn&Bm%Y1Ue^AXqSa01D?6-PAjL#+&ge=00v{+mq@TkBgP2n2~+N$PNo=_>4f>ShSZ zm2~Me2x}y6?5a~Tc$yV1rTs5f%|t?kjsuiMLT_bab}GW1?ZnKSvyJa9&fi;Q^NjqQ zxT7)};Nm7gEt(e)H~IXu0Ag1WjQ)%uax82U*Di4X5NGdlr;Ew1p6MUXu*LE|=QK>e z%2q7dA?ZdGs&N`de&90iDuOCwjAT;g6&_K-lz82H8I~p z3@G=$?9XKOe>42HkzlZZMJ|wvlG|Z!dqZJFA$(2n(~79C)Gz&o3NTRcAGfd+zZCrP zi={;MiwOSny&~!{?7RQOftz;+<^Ho>(Cmb~E%Vp%LFn+QmHyB7Ug@aTbz}XrU82|1 z9slE?1LzjOE9qGO0qZwVmzn?jy$-6Y4>*2V#7*RXHw!Gr!%P0vSor+k?;TLh4r%xY zaNI!se+K?DJN(}>aJ(O^Ws0m&_-eI6XU*_Y;$tc^yMM0p33o}#Fhoi_bUKX@wI=jf z6QB7CF4mwUqLJjBIyP{eTMvdC{T3DvgDDq{A7@FjVIk)QHpSev|J?HU(rZ!*r~$xK zUJv9+=GQh39azk@CS{?&dsH3ce*#supfgCRS#|m0Cri348Ujj*pM43#O zU%Tb9C6GL+yM()!d=?=4BV_cwxNfT8OkJuC+s9|U!Y}ZN>ZN}uU`=bsG-D}XE7pt6 z&-xLFXH($0x`Z@>;$P*GPhE~BfO1RA>=Q*~trXs+$A;PkC8Hd$SD@k%6dDhb6G z4$+*u?Qljj-YmF&9D8TbxS$f>D>n2Ur2t5Oid-m<^wk|W|DfRG6MkpuI9|%Kn}Z9a 
zKJJ|O8CEZz@LSU!K`pqYZ-LRT<$KQ}5*5)_`0_pmE#HJWh>{cvRFdvE2r#+1E5w*3=8XT0-eTBKvce zJ(d`hRT_k$xp$lG(3%k?{Q=5B8u;J3 zFh>;2pbA1e!cf<(+hofGE;$(}Ee4nVtfjesh{pk4a)c18SzX0JcgrfK!647B#0%rf z3H{%P3^>8)*h|Nrvg2<|7;fHyr-FJ%)P-+#bWl z970)HhU@_PQ3bre<~0_`D|i36m4|R;;fm0nqWeCiKeDC)TzK^Y8@WuwR)E`mOe24vzL@F10q%VO%IMlWD=@$dMxS{0&0ln$f@xz`-#4l&b_>ZC zqJ3nIJyaibMv3wb#AKW;RBe-Jf&cc!M<%$FLj7)Y+&_rT*l`YH$H0FrC;@ z1L6{J8^bCXy-0||OKBY8v9AUC*#qsXyb%gYzCqt4HcZgRNqhJZ!p`v|(;R$;pyH z$XB<}<-YWv$w2jT&%*zwuC>n#PoOX&<)z}5G=9ez!@c3s_CTZKiysAwm4uyF)dXFM z#bO8;iUBGkPv0MHAJln~m*e0`C|^bZbpQqv!CfqR@?h9nVrJ$u<+1Rz5fOMR*d+Tg z8!%z@UDyy@zOyLa7{$$p6w#aEA}30rqD;~T185nH`Pxne(gpl$q<^O-Iy>(dBY19TR2q+zbq?AKRi8Rt7C9Qyzlypcaok|M~At;>!l1eKn zA}Epq5)#r#Nh=}W8ChNTeRtpQ`{TQIu4}JlW_~mC{GR8WbKmE_&+7IJjeTm!=pqZD z83R9d4VI~y-rZFXsj+ZeP$`2wU`=dCMcOy6k&Q{b`WYzfVW7e_lH1($dT$V3aG!!f zg%*Ur4;}>{84?iqgK38vM9hAUh;D~VdDkn%{7@E^|7r91UL2N;I%J>e=Yo%$DnRc~ zDx<$LU1{n7Z;|3KqwZpM2rF9~em;9Ex(-QC_KRrWTxM+T-6##-)N#}}qo>U}3M$eL zU}Am}Z+Dj3HPWC2^oLd~uz|*|ik1<&F6uAzD47VF7SLfvZ9<&H!b^>p}%7 z`Q#^<>67j-IK+rY(pDkBTT30B5lft79i6DWNo%G5>zGiX{bd%w-99!8m^s5-HZ6l` zQbkE^5shj%&+$j6Wq&Z~AVHld%v@T*3O-RTSH?_c2MlrK<%#iwa48w68*(|9EjvsiENsvVl+4=IL*61u}ER_K3rD>LP zDh{y>5~se0S)6dLp!(->Rb>gyEEt_)3MlWwbrP-k z1Tz@mv@Je?H1ai~Ooybb4dK=vrq6N+xT)<@h-9Ar2T`KewT7oT0f;2F?3uw z-ady;15-f6_z+;08x^?q-hK;TvAuY>mwzMIQ6Uf>s1azA=WX*8(APDcVamzQK`TfE z<4&MV@#GsyI8&FkbgCjj{WJ#ls0@?RdPfR+Ey)+E{uTP5z-QFOz8cAMKw9?dOvqm{ z_!HXBbf+JiY^CpUl{$PR>289r2$ucT!^t>D)$;oH4D^hKW;Y>+y%Bww^H!46A&3Y6 z4#pk3yT$geOTMk|U^zwPyvMvnT-5Py-|qaUHglAYo|EoaN59lq3rp62Iy^uxEiva! z$*RQ`rqkuzpF8XRiQ#3KOmnnM9cu`s!Sdciz?r&NN^t$T1UfV ztzi)R^*i{bRw=-cfhzr$Foe~}=IV$z16N+ufkTh)LL3-{JGAX7MZDylNxkBD)R?R< zsYi&F=ViuSIhQ_aMO+liJ?S^dMHs&_7xLE_7>JW$2xIEoURvi_v22{Kgx`AkB6*D5 zMn?4$mkuw=`cXTNd54_~6s@*H>j^biKnk4tZO}+jGL7ts`<_ckN+VnD(M%_p$iO}c z{x%QlY_Q0|+L1CzfddNB8(VpMnWILgi;H06g!-{b{$dFBZ3LQ!^TEH4bTRlOdXI*$ z=at)2CaHl*_KhH!2Tf#t2U9mhEwU6 z@xi(iCwVUITGrjdlh%p2@0=9DA`_WGkw5HX?&#*SFYVMGW;RLn_lkhnxWRVjc};K; ze)4?;z3Plv?B$u``7bxVn304)eLKR0CWyO=oc1JY`xU#Di6M56W(yooW~qTQaW28& zg2_xx!0SwJ7x=}dQgQZYbqCKo>J>A?WJU0O5T4X1C;k{$peEOC?R0A*%n=gDdH%bt z8A6AQ401UMADCgL0@N${;0DbBf4V-*!b-Y(#O`))#bRoxC<$tk!R@1{aSIhfb8?uy z0MorH?#MbkaeAUmFrz9H_P9PD1UHbZHmWUqtZFl85;2aC2gr`K%L-rC`eX{tyjF3+ zM>+OBZ3ey|BM(mshFNNeuDfPp-(d{hkQXsI6nIeM)(Xh<2n-fH7Hz;XyUxOFd$JNWCq?Z-SVzIq zWu9?}UN8z1%60Qd@x$P>`lQ^u`xB^%4EACqF?zI+`{Qn)YDQl-sp+oj4;@}I)0<$r z{g$!^<5VD~D4m=<3sd>^c^hAzQIRe!8*_Ym^o50io%><#2#z=tzW?n{rV zNg6dD3%9?EMe!jjrNLjfr{eqLYC;>B_oqnpI#z>&e}f1B4rE}@x# z4?FK?-BEx^8c-UnS8QulpRhW2jB%_)!!h$x$ivgDbN@@xO@5@*G}|P0I4r|en-5M# zWjzAw&c`slU}NcSH4NlDct-%mu|m-b&pR-OjHWpzS2FMkbhk+Km2eEnsN$X)p}{$> z<+aNb z$Nugohx3cnHBiR=r2-z)huJ1D#qCpjffSUGB{3bab}-INB?as`7Z@DR^V4Hq!TZFw z*0~{L7frFZ=wn0emU`wW>N9l`1T82qtg-3X51t~J4tP4eD|5KkV6dy2s~Kxw1hAzcKcUPwl@k8Vbp_WY5RT zmG#ZjAz@M_5j6}~S;W^9y?D`-{21gJFo6N#{lLt?#sDw-lX?{d8Q<- zKxXArRB}aifjIYSWd4F;h}+s??g0uQL`4g7-Y5%6IZGI&{_5HB{VI<-foP7ljj%NA zW-$Jf9wNGp^WS8WYRtpi=)dSX5T{-&{6F-s1I}lg|EB+b&PV+(d9QC?yOPjvJ`Y-Z zAbk68Nvn-;#N?m!SBmt1(rS%b=1eJn)5M_FwE1r-uw`Y6+}FSRMJLcz8U9m*JHhsx zMDlb4wj-JN>o(dz6Y> zc+vece|(968z3i!fBxVcQDdZ!XS5V0l>bwI#P;9!~CTXoBfzrBgsM$ zYY#e+Lv_GX_0#`@KI<`zl{U8tf|w$)`y@yFm}HBKeH13h#1rb)`RuPj+gSo4j02el>w#h%T+ftf4; zKyXy)5<+*CX$w(5f-q+wK<%8GYH-isp>i42b2Fz#*s+Ir+7B;RzhTtn)g~1A z$kIY>hCa_oLt!nF%MFWNBVUuPXD2Fmi{(udg@zVs-I+1x%N=3%&18RVM9?P}$2LKX z1pkc9KFl+NX=ln)E-KYmopWNUU#pEO=iA~^+g`U zJda~V4;WwbcU>q4HUeu!N`xKp?N?M30InufUn5bs=bC!sM@HQbN-x39XH(T!<@B)@ z2BR!IxPThWiFR4`t&!W!4?xBOFND~3)pDAWe&VoYak4T<<}=$Z&V31b3ow1eMT{2c zJ4tZ{Go`($^m#Qo#5ra1bNDk(EF?@~ 
zc$!acVGNJ{q}%w%U}7;CLSVuPm(qzD{m$X@a%OJZT0) zBjC*hh0Q1pD^V+7<(3>Id8oUr1i^)EvDEYZmll98ulR`rOncA+)n=U<1t;2%fJsrx#l z@_R=|4_96GLNI21DUGpi`tg-9 zh69nBIMmS>IX=3rdTnURT_?XBHEpY9Y1yk`8_1Z9CqQ%pUGl`~69Kb`jB_MIyM99g zo~?IQLjvB?jb16Oh%};1N_JBOV?%$#k&hA4+wXjX1y3@cJC2AQ(r4Fo8+S#izN;kT zv$UssmAmof=H`2GX+g%@rfow5bNBq(LR<+GQn2(J#Ht;*xkIn}@K$+W-ouc4{4C^+ z&IEPJi*zksnule*R&4oxi-a;Rn@&}ntS7lcPsX{$l`paHKk<-q^>7@;Nn-cn#_R0N zo``NSwO?I4RpKHsgmaGg3I5;#bK>scWRO=lL$k|!v`uqe)n|`A_ZS;L8$Ua=GSIuB zTg1sw|H|oGtmPbo$9||dvm(Atc9OU;MS?Q-(r3(P_FGDNPA}Oi5gwae?-~}#@4>{O zCkiQADkpu~Ijv+rzd!_A@GesIxWv7ue0Z*OpR0Ox-FDW{vXk0%28@(TX zcI}UTOvaUBT5wJbFVu7|X_{U|c;-EVEciZ37 zrB=1l0&S}8#+f97v`AUFzHziThadI!8U=pCpEd7VIds{0qsbRvxj*+|CMSLCO_@RE zKIi=FJ1LJQZUe%Y3~FLc$yy8$#~=IbYkT_(%Q8OY=>GiIySF(AR}Liwc^IeqJfsW~ z8(>|lrqhL48W_Ydf>~IDF}ej$UG?v~>UqY_i21BrUN?_zpuV2hA1}(RYr+;^Bu&JeW z)*yFm5w0Em2@L-e@WVS~B+nys&zRyxRLZf;ehswhbi|3MoLw7POZmpkF?jZ@t(k7u z5gx9MrCjvKJFBNWiCe>PZ^*7oh~Qwi%bmmPARCR|J9|dLfo(_fV_BuyE8hKfIGpBm zF{KXS%AB*9#4F9lx+P}IU!s}?pIMssi1p-U!uFZw|3GrG7aB8l<>!WerlaH5uG=p?Dw6bX`S>B(Z ztGi7|7klOIHmwnnQ$>&4&Sh7L$oKc`TH$;Y-^{7IqJZ`@TA|UG^<(f!2X2(`&e&Zs zrG7vB!%|lr_Z0UhcBcou5v-~P?ToM&G!*DPdek*O@E8#>IR!om&R}&3GqW?^DDj%J z{GKg68hzw7|25sa6vMa9CiIhIxLlu@`jbriHt{>EpA-~Ik$y;*3EK7IdLe`Yr}IUQ z-6iB?g3nQiF7_oXX21B5
mRp}-y4xUlXM+lwT3!zFw6s8Ir_F8Ek-(Jh0Z$02> zWw7jYWRy|=#A?T(kTdl#S9r$2sQx@op(byC39btMP^Qn(e!XQwYe)_;1+4_1V7k_l zmxGk|$Y9%U=Y_X;L)r)Ez3}@HC8zOz?~W_5J0#WCy9@n_&6aH>)m`B+KQ!;Ly^oB% zhJ_)w|5(b>_JfiL9Nh=q8S^?RHAp$qqwH+t3C^_U$AzbS&)*Qr(K^}U%#CqnWqeo1 zi0B^U(aYULliCLx;uT^dejj}+hp|aA=9hiV>Re)RNZxhZtN`w~YUCCrzv zzshAVeQOpsB+ZK;|8r7j&Q@ze!n4fU@BEp}*hjgivDL{o(9SfDyXh#=CYmbZk`%b# zvL+BtJ&&W@$2Zg2-|z3EDUBXHd$)c=b$_9eBB5|$T|{l-buJ=!iaeCr=8%zTpqb%! z44{XIc#%VAG3C>0lAMibdGu7@JH)$#ZhEJDcPojnf^9K=7t$Edkg;%?W3?VFup;u8 zlbKxzuKrjM@(#P&TYyF;rA*ud^Ep`Vk+0f6b*(;kR8->;e>HJ5oK`wZ%Uz1YHXS*y z#_9Zx`Eb@}LyJ|87s2ZEU_Fo`S<_!EhphUoOyGRlNt*@>UN#?%-j#<%8j9R2bC^zx zs2?$N(&FnYvG4k_x97S3y2-9ltcbTM8r!Ble$|7@0$pV{O{k(j<6E*q^zf4XBLhom)*b)R>(S{iCL+3ilW-)u@vTAfE;uipLi-8KFEI3#6b10PehlktYb=z4c-`IiTS(sgR|c^2+g z4&$@+*QY2VDoKl!OYKD5d#hjhn(>!tXvutNj(}jCUD;#YRW-M|D!hGXVtDPglbzj) z(Ab>~iEuYjS5Jt;0z%tbBMDvY_pfH>`lmDtE+3X#?r7ZDFSx%IzL1kLv%JU2X{N8| zd&54Lk_3|XDP}*{va7V>9VL<(WoG&x5hdyS)YELcr zsN5`e6Nj46E$v&M*d{L}NlzO>&+rP}K+;L(yj3zIBK>G1Rd$Eyn>TIVTC>`S1k))M zD(4J7gsXU7A!V#B=q$&#KqywYk(Q%2LL!MV9HP z8{|3ZjvkP|zHLUODjRG81raZYJULqd0YLyN{6bO0!GdPU5CGXz+B3+pSNXBldO+|G z%N{$I7$5W7UmElTwx}D;3r~v3vI}mR-&Qq<4pnxe`F@&gS!}boIOB0qObKMs7h;Yt z^XQBzBTE}Bbra=0B?U(0dM<5*9mJ~=2H&~Cc|LtI96@-U!%DOtDm3q^?V(!-T)dq~ zf0NorGj&zH?2VE_KDzxd2V)f8zh}X#v8%C{Rk4lmgcf@!zSjhPl+rTA%WxTV zF*6dktk+FTFLnthEUg`Pz6?7>}-b2crSSoTi z1^LLmse%XtgAW;`$G1ha6oVcynb2tKvSf5F6Eh&x;!&BfoCm2{d~D z(~=5hpBSXn}hzx2@J@zx^Llfx(&#TthSf?{1eHs zFWMTRG&6YHVyak)7Q)nu+x5pa%hb_{NH}gPAhkbfFAFt1Gc|t5O^Hr`Td%|*@mHim zT{j{A#C?noauN<}`9POnF&kB9L*^2KD@ZE*{Ol9x$A3G4q0NtW8e99$E;m=yea&Bg z{8Ia-kZU+V;lo2htuv$%A<_0$I_$n*I@OdN`WS)awkFrtI`d1P2sI_z6Ge z!g!LJ{iah%!n2g7S~zfIv%B}Qv^}tX=jap1U#pug0P{Y^<&dLmBLY!GUjr#B9X?1q z&Gh4K1};4IfQRmiFE056&o^f7&~HSjgYwa9PG2!=&LqXYqvw-%3;SCB?59B&BhJ3E zx#462BZyc|x4-xvD*IjAJhAwX#gd%#U@Y@TW~?o{L9vb3L0KG>_BbRW!(TcOl9J*u z3k9SQaw1aBbv4?Jz(?0-8MJ2ZetgSKk3gCQkp6k20_J1=$C30GMbpS-?+GWH)V{~j zS?cYGXj+#=UF*s14|{U`U1bL5((tv12fP+q84wwp0pcb74!iAan|jZdKc@s&k)F)C!x_kR!qaI5J$ym;}u z+33PUh#-KHy26(1vvt1g)Ya2V{MOp5b(?t4J)~Yo)9Mk?OUMLX_=tBS)oV|xc;3w%=ebw6H#;GA58FLq>K&5yFe#ir z4vl}44f9`hzHh9c9`3B#sXoP?ZlvUM)q>5}iBae2-WCQUCA%;*|MAiO{#0m@NMn^y z9;k{_<6CgE_#4?c1|dK+x=baP>FVnisdN_Ty+~(38~OAg)veCA^4825@6dnw75@Ir zaH)*CCA{%4c0q)TYv;RuY4`;X4!UGEGnK{i8m@?IG@t%}^&ICkwgQ#$$v@ZsL_jNX z&uQ!)vmcvZZZ1*0q#`0cM?^X%yyIbXC~nSOp;U>?c+K+$*VO94L)Rg%y(iOOnc_EI zO9$S$(NK>u8qyvua%&t#E09ZTrLlf@Q z6D7gB5j$w){O4rY)Gi=Shp7TcE%HIv$+41D$Oy$G`6GJ4eNsk$AsslFo!GlnZyLMl zF6GVBUM6(bh5@a`In^v3Qq_zg&DE}w30D%5T_sLR=a#t zbAK5)-7e4b)}w~86M5oRQ>&txN4|)W>wLkuGXgNM? 
zU%k)tkdcV8VZ-GyX{9>uwTaM)&_2X9@%0Iz%M!#uFI7&qly?l@ro| z*T)VE8NLJw{7U2}(BYyz*pPr2Kpu+vj}vIe@6v@92SRsXOf8!HGsmA#HQCUgU5%(v$SZ(!ub2U zut*!DF$6TH*jJp*)Qfv=bFyXn&V|rQ(wrHACo=-fmLI7(F-4TcitU#A*gd2^`Apu+ zJyV8vJwHtTT;?}H*1jTbB_6|KvO&?8mOI4Up$i`bJmsE!i)Ajsz>l!a=#K6Q1`$qUNno+axotApJ+K{jYO)`O>y>sr??~tDs=8(6 z(D3=8+gZcys5#tfIGP4mtQv)-D`$%_e$cR+((1?rSbf>D7ZImwyu<2OB*w>*M{TtT<=o z(Q%5TWJkKgL|?*=6kdAPk#s5#srOGedc?lnz#J2vwKH}X-9tD${R}1(Upm#yc+JbX z!{G@rYH1D-3_dCKWMHYM{eaK*EWs5f`Xl8g1pHgt+H1{3`oC{%$bU%w8cK6>#^*qb zWnx8$p3A%cMi+{nPn+8jsDL8qMeGZQI3?|P@ z&*-TehhadZ*-Af%Kd_Znv7utk{aUb)V_-xi=&;nrMsgksIz}!rdLixDh76g`#hUc` z0{NULSe*=c!{(h08M_whIl`|h&<3VNcJEESVvvxt3B=(+Sg-E&^hrrcKOwwnA;Sha0GZ`N=(U!|_q` z_2)ZUtBd=Am&-GXi5 z-A?quhmOnL6z_+-!W`9=Os0v0XM5r?=iz@A`L2MH_E8$%)WSAI(T-D5Es1zx$xdX#{4GB74TMoju%%DNH}mc* zklZN7=6@@(aUzU_1d5>Dls&4YHHw!uUHPx5wkpsqwUeXvhm-L-dgV;fwq9avG02O= zjC$?`j>F$tb02;tzI@R`u(+mvzW-w^vWYInQxfPQ*E(OaJ8qD)tEA;ude*jWduB2` z4BLmL|IV*v@BcrP2gULwrV2={%kObbT{Q_ddcJ+}x~H}4ViTNAB0$B7X{ASFIU0DV zv!eou!aRPL^8IkHVr^C6T+xaQihl!gv5vkf(nyirK`=J~;W&O{Sn$vki8d=0oxx*y z?JlI=?~@ZR+e#n0rSe-|d-BlL82|H=?PqM5iZ?0`=huK;tD^@B;C%sQ)YbL}%AEE? z`&`_iKnPz8V#;eyaO^G~?Z7p?KJft7TwF@yXsk-P6cvW%U*p3=SyjgB0l&C?;q+ID zPag;wr+L-k28dfhVuSR^i*OKlR1VBX3$CfBwctB{_32?z0ncvlh)a`=@ksiAXMin( z$8iiWZ;7m0Is)y%M$(~7y9#ig&A7Ym zu{r|4w{q@rF=QnPmi-;GjfYV?T|udG zzj&jIw|x$iSi(~wTD=w7lw>)N$>a2aAb8g9ySSH(HThzvu#?YjL*o?b#Ou|eJUG2I zb=3e|V)K+Wfk&Iid zqrx&JHHo_1-H9;?&3nCi74-N?H0lE)rMN|Bep6eL#=_VO_}DMnN;A{5A_t|7h{zEd zJP3vHx;XOJ@pV^#FshlLO4(DKa`8H%sST?w8WZ$;SL}5?7XKXV(dh2g>!QlcSxEZV z$g9jscCC-OlJ<~-TtF`^Cnxmu2ybf?y>gn#E)gtIe0}ZZ#0K;q!W8bA*K5y*ai5bH zkB#pW?WZHbz-S}r>?G%-jorUyhxu5=Nu;anQ`{F719G{XEG#h@g3CI5eEZVnf}>G> zb1C0~U1=%OBvsoE$rVN9t1zy0w6}GA5M&zm1U}0o>) zOfB_{AW0wl3|(0FpN#E>R5~gc=2iY?{A|uAgkau`U^iR7^vA?6H=4W4Qqma$gk3JP6?sY`xoGqfXSFt# zio4B;&Dn!7R8Dd4r0nU%*1p-)fBTBz9p5$01n2y-BzD%6)Je%V&)6__0BR>?KznLR zTwhBc15GgL5E{jk!%12uo#S?5p2NaUhEtCXgk~BQRMd;xxH`_8&%YKFLiF@!yhY>h z$yj@V?D3cRJ7jsFLRJIWX4l)5k{gJ2-hEcCVmqm+8HVN(o$+R`2+9OyAXL+C>y774 z*aNS>5g>=xfp1j9GQ3#0&01KLz6JI9EoGxrQ8`XT{=$yZS?%1!yWG~OYDoI{J_X9*31iEk_GBkX8wGbL zq?I8TJtt~Pl5gCaXRax3F1a7l#uCS+RHMQUQa|-0<`uMGJtiP#Qcx6Eu{?LSn`7o8 zy6^Km)l62zP))JK%e~x%=oawKBg=-4rfpzQ3`jRqW1v!@BuQ5B0TUU@`Tq!y)Cd+L8rj%Rdtv$`7y3xA_5Fsjk-vuO_@3}(X5rGZ_1h2v$9N^;+_0h4rA#MI)Uz3` zEA%>GhrSt|y{*P09mGHN$_ym(pPiC+Aky}6}g>l@dcl4>+^ z=-&F#{s333#o0H!AA<~vX1^a%9bW9kHDf?5&yISj`mCh+X573-OK_Tg_JukEFYNDn z?>RA`Id==%tn?DzcPXBT6n>u7W~-aX^tPqUKx#S@NR#Om9N4GR ziQYUMI?$rt-8$)AwLY*rqk8Gxe!I%Et}tpSQSuoNe0i1w$Ytx;$?JQ=MCC@T$ncZ) zAi3?F^pmmN@A6648)9?}H8IrgXlEHZ={w1`;r~JLBzHgd>8Ncn(}kn)a|i44zqA1D zgWd}yzV_&6Yjbczpm?+wTTJ^-`PO!5-{JN(=wx4~A+f5G=Dl}Rzu2hIZ@b07vUcMV zCV$*G`PR6(E*G;fVf1^rlYoPkjQ0FyTn#o~&@-`(pq}hP*3@n<-0B}&7^8=7+|tL4 zkdq`&?i#Yb`?FGj42grCprvl??ycY&t{2%*-aj95iG!SimX6BIhlf?E^X*h$Z~v)* z2Il@8w%jZ zKceNMA#4ZR5MS<((|K4kJ>5&vB{+XjFGtu#HOQtTX3m&Xq$KyLIFx7TKA{JYNjJgS zsId1A3Pq#*`=(wZ@<+mgQc9zzX3&rF%F@4y<@Pw^;jt?`!Cvv9N?fM+M8WS@`=G-J^5dzV=!&_wTo9F#xn3M@|;F5?;gk8?T~4 zgiAnCtp5qBmCrc^`u&cBC763K z2JISx@Joec0Vbtau=oDD@YeO zFo#(&27fWR@uJGEIQ6Z7%}l}$Bsi;|B?P`!;g28T=T(2#r&h52gui?BvCnH<#y!x5 zWbWSxFK3r((X3D&Pp+fa-c~Ib2nw$M!7T>8u-z7{=+ui#S``6ZrPVojUp?P33+Z z%}b&&+EzOj5Hr*S5yrX>Ojhbk8JM~3J9BaP*rbKJNQHJhjN=_~T-8F)#6wCpqMeuB z0JigzPM9?Mr6Zutjfko%9>j#TM_sgso_~DabbO_W6QFFgW*(tiQdc1G&ie7NUJ$yV z@yresD*eV9qCq6EH(seMk?}&l05aRKUVfcJ2@fwC`cQJ)PhP9+{-2zP|wE80X7}7nBKF%PL|F%_C(( z!Lxx`@3(jgv@so+DSjO^ZL0rQBK(VWv?wfL$>OA3;;sM}EvYps=>vCd6&3?bsmzpx z_gPBd07~L+PVy_HvNSa98z}3mZ3MD@FSr9uPGIb0Z+S>vWnWy7@CzY-J(4-WVmSb{ 
z19WKhyFH#K9J2!XIP#fR3~N=8o5t7uadXi^LPgzu-?j9C$(DnGSI3SnMz^Pqg# z)6X1!4}3n5^>F`ybyp_?WNRKT>;y@;$yiyVnsyjK?;#OlLt<*T%9-_9@)N@9F=V$Tr$U671t#*k~wmPYaB zmn)U*JIpQ4u}FG^NG??P#cjUdS5D&Zg9-+e401AEv-PP}xmwTUY_79f{GCJC1AtaG zo;WfYT_F2dO9z4mJL>vA0Z(AU#)X~gLFF`-R=r&z#e--sI5&sMBf=J9QcI4WopMGH z9vs5>uxGwi6i2lj-ByzbebC+X<3oq1yA)g`R{N8K5~t#LUHNmzy;k~@?=n~r6p!kX zQw(no3wGjWb$E?x_ZYw#wsN3VyGwo3#}Voh)y~`3UR}v<6`TbSxm@&J$K}~RWH2O{MLqJe~9cV(h zJuGOa(w}X+pB5~RR$)_k_jE&|9wq=UvT}}MxfPOYsu>%Ko}N;+6AdGZh6sCo@I6f zC#k%O@N@Z4JFCk^psv_g>J&ynfi-szD~Gh_F4uVOS+G+iWV;ErIB$(3azU$=M6*C1 zMH{0VoP@O(bT2{+?}b9A1}w|mK%}kCvWlWY8FPO>^x6$sj>-kz-^a<~Sp=AOmqxIt zBOIFE6G!6Qp);jeN!KUwe)oTLtx?=gPR+=nw#PiB`DS1!omftuSU5ufe|HzI>C}^R z(JV}V!TownK(?x56W=$ErbrWawRgYid{Z}Fz{Y-h3L|(ni$nkbLcJe>efT6M&zo3) z8=Mn)mMF>rpaF_<$Rg!q$#?EVQUdQ`Bi49c3)0hJM8_}r7xycfwa1RL3{t-LuoOGB zQS`wh27J+NDcbfDsHH$o*d}PMuFof|ektQZ__;Pk4z4v$=h`GkPtRqoOZSq_CGJf) zj#lVbYjzDn4!+53&tX)EXWN zeaSW)Gmo^XR~%Qd5%a9w-BNMh&X@|~z35SWYsRck`1!z7BF zf3o77!E}=PLM9V9(|H-9$Xi<|#r|RmytV&g39cwZZ?5)^7n8@=%rqo@_3Cxt2%ntZ zig1RqnmcrX_?;n!OI5qwpCkLNROxT94|q%|x5Y%aYvLxI$NEJq=-ofn zs+hy9m|9G~tzb9P{M>{5@wnKJhtus6Z%I`M;_>ovy#26h<*MColigZ56CHBDRPFG4 z#yW>J!6Gs(pbw`H`#IT5>Qv2TmA-pZt|=fPIn@aZBdrAlMJKMjQ4m5-D`zhM&3UPn zFK_rYJ%K*i;F>63_`y+KT6G~^c{h}nk&X~sD76B zzPy<|$QJA9^){1Vnc9DUxpSGN0Xw1+PMT|#EDvHxeWVrFRN51>#TnqvXr<1cs7i$w z!)xUGvJ@cgSm@UpBQ}_>FJWT~XyauBZ8SV=izY{t|EVX_-mHG$eYGTx!M@`8=}xWn zRZK+2^mZYrB9NvFF0U_|+1y&g1n|)(QA?C=BK+c%LE;obGcL!1-7YA~bYJ89Da>o` z4l^vLJb7aj-ky+|4!HX14{Ag`;n3JL{K~OFu2qXQCM{t6hUW|FfvlnAoC~dsLhmF$ zXLx+R$dPyR=bDAs1!WbpX}Vn73{+@*;Zz?#K7%SX>)!F7pD?{2B4#ulLIYuDBTykq z5$Le%#*}{^!M_A)GuKU(=YnkPweYZ!mV9wb$3#Tz6tG^w7NjJ)o5>)*IFMGG)4{|3 z#49n4$MSw||N3zCX&jCMAMF{Z|{LDV!1Lt)JcYRGlL|koWp$EWiWM?&gwfdE2M zwc&PXe2(zrE06J)o_lLPta{<_7qzk15iA#Z7GnXUL&pU;4 zY6cmLcAepI+;zg!8*)_b1MtBwcMqW(lA%y8%XxSag@k1lm!;13zL&HP>RP=rz_!C& zAnPQeY9Tqg#YDr$oq`#5tsgdvg6um&emR>cn^Xq0dO;7VWe`b)GAt&}XQu84)NS8? 
zgU~Y%P4=g~=ub$QkgX#%_m34hNAfLgb?T>t?S9=PyRk?A~6Ot(e;Rww;@h5}ftK%>9&&x%3Dq$we0Z^RG*jR`t9u_yvFa`S(#%uQ^ zhdl*0(q%I$@%))f@Qr{biseDD8JP8kg7MPy zGeOTG#CmJJz1g_xzm@SgU>rwm?A&9d;#z0>w+A)-njLe7&_owzDPciTM`u5iJ1>5g zY`pCX>#KW4RV`hPlkzto&mIZ4s)=`#Xys%VtBYnD#?wIoQj)KMLPB>ML6$)#X4LY6 zLvhZ3Qg`Bf@Q<}ltWLaiSC8(=hU({9iI)GfD6zVe@(MCgc zl&aw`4u|s$io+p~TujHI9=uigz&kKCqwNuWYHl%0FMA)9mA{!4U^OQO^~neoPVQW# zau*{>&H15iTt*_(c&bv{IK+lho$QKj=3&!>I`uKB6NsBt-SPqyc<9ieXQ8`>*O7a+ z3TVcW9jqH?hoN8JT(H04nv(SxZT6Ziq_C~k!DW?BJQ3j@ZW;+W#?$e*r#5AMq>Ko@ zk+wMhEV)?_Kzb{pct;4i-@?pc6@e;;y$J%9ObwKFNneV07^Cw-`}^Ml5nT-wx7bl2 z1i(#yF2EckxGm(3?!X2p%7O9Q)Vh647Z8 z{F>dbhoe$DI~OHBEwDv9`u%Q{&@%M@Vp)hN|EZYxzoc*s%)frtiDSt9f5_})uKt4M zf9W4U`EUoNPdG<#$zFUbDab3`cp8kS2HNNdBHZVg}sw=D%wI9yeJYk%!MWqQ?s zIlHzfW|*?PKl$sY;{~~!glYej+;F1Uhr0wv1r_!*{L$pE^Y0{Y@$qNBe~W7>dAmr| z1;94cif=z0Dh?02h&yRTO3)@`E`K2VE((gv#GJ0ojjN&k!)O1{Q-~&FoFyXJNSt7cRE25{aNJG+|VkNW~FOa3}m4 zhcfe(M*`69ie?3>-hh0kca_+P2)JTk2MoO?ti@f~D%)e?uNSiM z(F+BWzwT!|)#8l3NLEJ=wHt~gO>4rL@66t z?^*8j0<)h2T`ZLLu0}|p%rAJ>2=CH0!qXrt9BM)JIA|#n^v~GD`k@D!l}W$AiZ3oY zjQ83TfBH(;>AdhuS{*6!+RABHXw3`pT3@TR5P4IQ6%Wim&=sHnk-siPaPtzv;mu=Q zYQpKNM7K1aN9Cz!CNkn*y*baCj6lh`7BBA`F7ed!7^!xCg*0>eLon2U?JjUd%3EES z*2cO~=r}Jmc^42GY#nS21#>6yyt+SxTxSjS5CLt7yl-J9T3~w8a_9e!{Ec_Tt~%f@krT zp2I!^kWTyWuzO+y))6Wrn*F-`#YYTI<;^LE0E~@@K>MxXq^x?6b0ST@`lH~)O7gSW z(cTd63e{T^6`&Ek;*kzVFg4zm-HLflW0}JHjM@{$?A`a(`8X>0cpdx=d*dM60ay;+ zBmiScFQ)}y0&Wony`UlTo&FW^5~m}=pNZmN-U2{8uUNw~mmC zfkp>JQP&o)SoZQ&yyKU#xcjp^A9WRW3)@l^{BrBdJ8X~BE1+FYqkzQv@%W||EOHm_ zRB#_xU$M*7-l~rulUBbhBRwozi!*Cg!TP~jGFP>U?up6?axH!u0Q5JD%q2cxO_5hIj~R2*e;=e z@BFBJ?g;Yp-K|5R(X-GHHLxk*?-mTu-aG?ej!*DrkNfFck{{IHisW{kxhWFmE$f+~ z*7$CO#a-%svDZV6tck(?vSScV5(17cg#NPuSuD9EFmv7Pe^+Jz76(H|1L{Y09c2cg z3}NK5wyO@ovo!MJUp%Lia>8wP_0xvsV`ce1;PW7q`zRAyGNq>*8n@CLvuif>jd1el z3BC=UQYr;4!S0hJtkjI#kv73pimeB+#6g?+VRn??hdlO}yMiCUGC3(^l#}6-{OKt; z0ewbMly!-p;SESrT{WW|wUZ`D&)RmtjSyZ}0j=uFDc{;vrTG~ME<6Z_D{<3cA%G@v z2kV-P+C&a8je0(c+8H!O2EIxd>--^TCsa{^s0=$9T~1oQwj zhQ(+gP|>?0-ZeBb3eC0_XV6$v?KC9oS66SoCeK&6Z`}eWP6$~*1VA)k2+pZAVeWn} zqo_b+G9sz{EEFXO`vdY?ZaC$RdUk<*ZHBI{lw2iG1NMHUQ1 z+JvXsY9kkPj$muetc^4}=^ug7b)&~?&!T1AP0_IdISC334+VcWDTNKtum8frvzWap zwT7__7e!4#UpT`eZqMFzPiS%sm$e=&M$mTLJ;^k(+gO?}#lmWkog~FjAe>dE#vcRV zw_*`<-dwi?bWuOI3u#P_W!WF~!&g#ltNR#u9hew$^8<~8Ja_p)LO{>@!ex20ZqKKE zu;6li#Y?XPy<8U(PRZzJGjNZpb7y5?%hD6->%9q{oBMeGhfN45c5n%>X&rAc3*r~D zc{5zJ0V;1Jkb=}&yW_m#M?7qx^5VE@VuhObpOVQzHwQEhx`V#&8jfVPuBMHh5@&Uw zZGEN%hX;^K!ndOJ18t|z$SGGEPMR-20Wfy4x^wEjlyi&h)J^A5AqDiXFKANmk}NnP zi0yZTy##mUVw29}cxmBc>qQ$TM1aWUo(y_waYpECswPNCzl5kwADI3$dP)C8hciK5#kcIi@_*PI_ZGlm)R<@iMP7vqqKYC?6g4It{tuh92Zw_h94HUb4If z2TATlt3XiBT<9y+e%S=R#x7jOFcC0teKIO4CnJr~J^8yS0@=X*#V^glu3zSj-Uz8+ z%&@2j-B1~@texa**h>Sot0x|Di!-VAxOm|LWPA**~MsX~PJOX>g}GsZ9tDhx_bW(l+*GLK1dQ=5KL^c8HhS z#8pmb+1I(ryiehTQl`A1BTkGGJtYTLE0mBAL@iJ%sEpSEeg<{7d{vb9X|@(;t8_tM zTeQboy;o>J74@lZdKUa+E>~{Bo~ol~b{NULc~4axz;7M>>g~qd{QG@lN#~W$Tr+l; zhEfMWYQ@xD&{ZTjt_g2fdhMY#C{Oyi74qZ!)$aVceL&*~Fbhhvh50_PS^%|;`|z8o zdi+{3I!nWA%uTn|L{ZL)itQXQfL7>af<6e^Kk$pliAgAqVdt(WQtdNzz^dRA#0NnB z2p%on*e97RsMb`*x;pR(PMpQ5vb1l0X#v`};!2feD)Ynec%J|B;grK@$(~ zK{d$Z9ZukW1hWUDqwD0y5EHaIdL;o9Wt3qMx}7>+Eqo>9)S;k+Lm3#aDZ|yZaP@Cv ziBY)*DskF1cgw%}p(qoH09Z<#E1$#QZv|p4P3Qnm- z09rR5ehp=}O`+`B9Oz&ktKiwW$tY;Khr`&rr~Sf96BbBc(1j~_O1bVRKI_vWe|#zk zBlL5UdR+QNeyPRbWhsV5iCkzw*02NHapUrqS1i_Ze*J)*nfyq_TA4re#{NytB)T^k znw_g?zUa{ahU8dSv4|+N!b|{R9dvwe2E39a`3(tAU>rchy}9)6>%Du<591A)7^EIB z-P{OY1$|9}j{KQ!GeagZg3iBXV&E%5!ML)E=!FP`WOp5uSk<)kb`wXL9bQnluGAh~ z@yGgGW>ts;G>Mx`RQig6(fWC>CQ^%+L(!Xy5HK~IAdmm2u(trKYTN$56#?l+T1f#Z 
zrMm39dmy`KC3-TR*7a~=+E_Fk+x z=ZMevj-XDzOpo|lb_eZ#YhiV}iq z)W&*G=gSlDcBF-CanWSZlmYz?IPG3^S3986pKN!3Y$MovPBKz^>J6Q_&yf}3ugGJ_l z8P`MGKD zlC_qf<_yQ1?tBn`a}ow^N$hXh=|e;xrr1$^dDG0&q|u3&t^wY-l;(Q zFSP-(w12k9GOp;UsdE<9FR4EPkr`f}ro$IA2OpdsmQxY{MGgepf{}K&8OPQKQGFTz z+V_o%7f(XSWXeTwC}utDXj?Opc|xGrebtffw5_AyT~m zN9octOE7z;!_zevpifZuT;cE~fAJX?sqYt0@*u*g3dw7rfXas<0Ucx0P2?m7Mu_3D zVr+G*Ndsrn#A$E?9F&%^mq+||OF(+ixf$0B|H|NHEXlcl>I=Ba$^+ka!B~coet->- zUEN#%z8LFK&S;}0N);CY-vdeByHdf^&MAxHsp}zxHoBKU$J(CaWTBJ;qdg3lLDfxy zE37AFAEqq(f`49Li_ZMAr&pE^R~((-mB{Y1rbx4<1rk-k@c7gqW1v_v`fdO<;)%Hp zPV;w#hiDv`q;4`%l=AN2WMPA-!oFBOn0&urLDx)V-t>c;y;A0S7@3Tt1nQj%n9>2M z^+trGifw<9k`lg`lfaXwcZJ>&GPS3>kW=2refIiFlJt|96=aNV8Tyk;dtZQ(LqhB` zMZU)xlXfQ~{=5>N1)9C^JO6IW=+7Zn?_bGVdY6&|j6t7^-;cfvD)RHaFf2iW3@`V` zWdks*BzW=(bA!v?@{(kGGLq)}(S>Js;w8wq7G(Mi1F>(sHk3z2iT?*`n7q~SAx?K@ zqZb^mA( zyfPo>DO2bqsmww-UobLeUjw68N`yuV9l9Jt4P=9)BIovFkFP>J&kJ;co`FTIE9NmO z6bNAFT8d=}w08d8&|nU7iPMIL!NtC&Vrn!Jr_2Yv3Tf&?5*rXTqs)0*r_jXid+X{Z`M)C~QJQbHtoS_I9)=Ghl4nlo8R4g3`-N^~gFh}-_ zVe60~)Rv%jzUPP*!WK%*eboM*7BUe;w z^fa!6L!8BXO7k8xGa7olh}PTB#@?g?@qskit^#6od8az?fXkYt3)G3klLJ*JH`MtX zznrlFkn4yqT%@GP=JH<^A?)m&1vnbe$(5ir{`ujeWjh&0H@g6@J27c$@W2e*i=#_e zxr)~qd#TmjWcD$kTYBNdo>$G1|6zc>ps}|vF6sp-NfE~4)stfDt&pX0{BzL~(EDIc zRnN^Uu$ox)ncpqbrsvcCtM8)gve1+PP_5t-B(N&Sc2u!i4>~Jh_zL8wkhbPh$OS&v}6=<%KD#};?&yEVf?U1s0 zWxqX{DfS(@txhbfy5y5|6I+V5Th2o$+aA)Ek$9|};c^F4Zbs50Dsl8<{Zn1nk{U+pU> z_yRR#t1@QeB1cYh3vA4U{i}^o*kbT8%TK8*K zyxm7Pz0_0U13iV2Cw&VAy$tI8ob6x0s)L!mr(L+bYx>;t<9t3N{I{+a_mrn^M_JHcUN=0l1ZH5_@>L$6=6(a0 zQ=;@v3T*F6KeD%g4ckY5z!lfgLj40i7IOECCzhKN{D4NQvoArTE@bAQLx-}k{%1W< z?nmCC)h#0X6l#u<$Nv3RS(jOq2HfJSi8!fC8k^8jo}Ny| zpw0frte}^ga;>IoepS%%byvvPcBG$`E5V>1WKj@v@38{Ho3kp#wbfKNC)4Tu zrR|7f#pxW~Y3#wi=Q_+~-E4Y_lguN~8wP!{Xkf;WK8&f@+49%zSyc^D#AO?iJ7&0Yf@kEiu-khBTy=>U>W z=7sfM(?(Wt8Ur6J1qByQr}aqxss1*=+E{)nJCKwcyy%Bl-W&xP=O$#W#CLbVrUC?P zkC2p>>raRu&dLzfy&)AA#5_2$g;3$V9`!Zh5zWD^s9^j@P6D#$D4)RyjPJzHEH<2R zJ@Q~Hf{doq9a}a}klB2mOaU1+;)n-p4I(ZWK=3ZJ_atVk85;xWdB93_;;|zd0DC3G?}a$vAAaz85B<>NXqL` zA>q-OAwDa+Y6F8pum@Bv`H9_)v{zUS1v7ApeW4TXcVCP*m7FCVWb_5BMaO)Ho0u&Q zfV83~8y{%f^wBlL^b0sW3b(9RlUoqsYpjl4LF7^PAMhIg;hKj1UCpbQZ)R> zPEu3kdBWo3@eCng8f%&{2%H6&Q%9Rdbs36T3P}>6vWBTiJ>MA=*jT#W5~}?sVHbQ9 zsJ~u+dv?-GYGFQ`JMlb)1!OEyF433rB4dDNaMB7oK}TiEdc;hAd>b+{{AISSBKJ;G z!;_QrC98LSGKzVhuI*;QU1h52K8k>WOqF(RM-sc;&yy*&+9IaWu6Yk11J9Wh?6IYc zPf!^zI~Txa@#!g3eXgsuVpGTamgq$rQJOn%1wKrd<2JZ53XL*7U;1uS6L>mRsWD5! 
z&}qXbAMH~J4^{fR9n~rjiwP#Y?d4Kej}T?I%(j=j;|3+WDtWJ#v#_5*)n*rxb$;LW zW_&@@#ABz?)C3_Y=lv{u1g?A+Vs2=5BS&K6sL*Y)bfgAUxy3XWaLCXt_IErSgBoz)>zA=00 zaFqg<6!G8`Wpi&}Ze(ekGYJ!f*(^|1CG$@M_5($c@YPn5cpeXYj2px|4W*OIV{D%6 zv}1F0aeg^`#1l*`PSiyXUGiED_bx+L0Uea>_!Jm)cu}8kP%=I_UDY>k12|X;lh53L zYH_xvC`B1z)&rt0_X%{uv?n7^=^P%WV}&25I=!5Od>|@tyFMC!;HZL(Yj;`{$sJf2 zC2KJYI5ZDrH-{7b;$K z!n+l&OyoSfIH()`U$zRtk>h^U;`hGL}!-sXma4VCy0N9^WTuq_80z zz3_8k+b{$CkG)x9B7DTtcx4yQ%J!?4vi=-al$YAtc)Rcr`+OxD&aX90$2?~v1Oa9=bH4P{NNt zf2bMIoySKUfC^g~{y-gmQlj|7GEtE47fccKNk$2}<_4WC_Bd)O2 z7s^2#tagZ+p-mC$Junte4U_6Rj)G;&P1tu}?7q6#x!OicWYObxoj(aXJ+WxA8M?K0 z!r=R%pV8DynQdWYCH|e?(Qq+QAT=K5#^TPGWHq?{QUSixuw5muw$G8;nP8;@jY3Lo zj_Caen3kOnJu^J9U||ie!+a5tXDBaU1t|1rsAfx1*7a1X*a$HGT^ySO#PRnsjngR4 z$7CPvHag>-U>31k)4XPjlo+lvE&^Cv;ln3(fmU^6;j;bI9qw#nhs&^YJxDsbQ4RE&oV z&jycN(kKBvd`ch^?P2Fq~WJ+H(Zq2=D(=9JGNd7oKfDo5v-O6m!d$#S;SrQMV& zR}zVm`d^}_`wDG2zi{eU9gGM88B}^~p#V+YJhVs8yqACC-A?zF7L&1;fq_YAP<08F zY5lT9M#OkOH~*K>K>&CS#2X318@KhzH;aSu|Dgncu>Gz z(Oo7SA!?$>R39s99}o*HM~pLplA+5|il+q4P03>2e&-^p;m+doNFJ;h)w42sb+#HayhIudeo z&qm+A{)}5S{)t!)2es}oc2Eo|KS8eoMA>WB-<*|goV)15A<}{ILI%Z2x#aiG;k}}cRIa0QpY-Ps6q;PYxP}%U0elabc)0bC;q!>-LfVC;dGPFFd8c#(E^y z3i3W)16G%g$db>ra`=+|07Hkt9J>8}b<%kw&v544VdP9RmlvEm{Cl<;3G)?7c=-Hq)&n-)>wdp` z;u_w|rl>$y+#Z_WM?Tzgu7^sI4n0J8^AU$gje{U*d`wsojTt}5<^YCO4FG*A5r33g zeph+f=#vGqS(#WmDlH+;N?|lr*W&oh_f#d-7W$Kh|6>$L8`pSXz4ta?Kud||5r%{> zW1XAVXzba8jMD_Pe+&jLBaJfF-^viV){}|o9Joa`M^D% z&i7X*oy%?i8J#q=XF>bjp5Z+e7i&W&O#&xh1@r)ObViTP8?b`qz9Qe}BlfDC{=(O-%(Xao%8y;LJpGWhZ4zf@OJ6jk(m=j^ZQVlSUR4&>c>d5|=I zA?*f?Gjx*#eb>}C2G&J2hW288^Cg9?4O&A2DB4qjF7cWZDU8gJp(GLjiX1u?Z0Op@ zz43ZRq9zGQ`W1dG<_-{me|J-mPl?Wm4}5-gh8LoV&jSyw```Q9mgah&!iv9Z6ublG>V%U0T6iahMTNdE*a0=*iCE4?m!yb53dY<|RIMlfS( z?x>sKlmTkUooQjQ90RiW9KKUU9?)33@U5hVxEM{no&J+(58oWZHL+4?J4931_N#mC zH~?(}C#sHN=O^iTpA%*(s7tU^*|DUfDWCOamX^FNzb0eh=J#7Ywp(CG)Z_bQ9f(iV zTNCktbWjc|HN&)zu@K6xl4E1Z6UJVfIRL5c6>|Y@L#P2{>KdXN^7|_0*4oq;m7%pp zrsz|2aGw?0SoC#@GNPcTg8JyV$1VCYwQcmr*u7h888rzp9F+^{W5dsNRTPFkPHea( zTPXtlgV9DC95dPSN_b}Xms+ilESCGXe6Z9TBzqgw4{=kp2$_yv2Qn%KGCPEX>EfU7 zLbQ0l6qd2@@xy+9Vt58$(noqK4hNHE*;fOhbS3~bs_QGlU;%4=HO>=;hc!sQ^$CTp zxrzFL$46VqdOM3C20`O6{!uE&f`@uqO$!6T*;?o4qHk5<$yOh=pcZ_;J+>~bw(bB> zREP?uN6F*nPaDKOD~4WkUPI(0CQh!l*2}6cV~>-QQUy<|6h@DT0M$$T1rTa8f%TS- z>~yaMl`8mekdK0ZNF=PNyXeWchQZ#69#yj2TQ1IxbMDi|a9p6XswcuB;%5`#ubFZq zk6oB{W;e`ubg>*PJXVF=RwSTlrYtH9Vha#?2xz*l-G7;M5LncWF6}JAc9isIrra4M zju@K(x-gW1T`t2AyR}0V>*OcC81#SDg<+J_h5x07OV(n<`tk3=$Z27L`hV3diV{ai zV$PiXQxAdHi*o$0*Gm-0lPsd4{HtsWU3NmJqBIzwvg=t0NE9|8TQ)@s>OZR+r$e%! zBvA|#pE9kdb0~i?fBomGSQv*V+uc9dLf9q`P80`qf6z-wolW(%X+k0dXpJNbIizV> zq*Mv4%nNZjdSmeVLMhYl1*f}6-Ad^;?n-t8r{yA#6x`j!OacEYuCt#r_EyA z_VT;COEq#=#%kje$QvvNpH9>|9eGV2jn>}vu65S@`*9FHO&fFsflTwYYbMIFL9~im zH>$linl~2C#rXVsn8Ba_eu(qYK@de{Pu@X_S=BJACmxIcLtrA zBr-_=jjKNHn|KMUc-xw8#SW3p%SKN$8gyMmaEE~dYP@iy3iZPMR zgb8foV!lv(`~4iypI`V25nQ#ztm*Mc)EiY=;Xsl;!i%%}`)LXATQ+esHoZoDHKcn# zDiPCMwj@nhQ4hY%ti^MGFCMPo(8?$jd)J#56IJr>hc(c$qfjGMCmtDU-AeZt^SnJ;C`4Pm6k+@4()-)yJupZd?+G^@}Q!=#Cb5>>pvZbh*wyoV9yKklB9EU^!hc3(@QYB!Pn z-+y!Q4oWdBoJv` zmWsJ&JsS6}Yo_=?4BDLr@trs&z9ZQmEwI1yIYg?Oi3aj}3Z9cb+KFuBlq<~mP_i+1 zf2Pf;oI?)l?cdM#J>y%9@%ZjfooxTZK$%!lKCdH){n)RJ9V}j-zP&H;UIOi4nzD#x8hRFZETPXKw6a zH@}>%^?szp!201XJ-y%twFP||(VU&8cC6#eV*(`Y&0Vz6KUx6xd;i$-#qy{hbp6^! 
zNOaUVwxy+FHY`1?nG1WQ2wI5RLUdP8UdFw+rqFGM2#iC%uK3pzsjcxjg2nuQgD2#TMCFzcjVvZdiq!b;u_D zj)CC(L^SY&?Yo)gvg!Gv&kXJ%b@!7q$@u%XXS?3Z*XQR9zWfFkv^w%FBdcytRw`zn z)2p1#jH~y;4y%sZ{d^rimUo$RA2CYOD6OeG9I9)Tz2)_s8V>1RE8N1o&S=dLOvLVs zMDO_fc7{iQtt+GU>g3^`uHAif^42*ilA@>WL+@6;^Y<)Mjd0HVykcn)uMQ>o4-&eaLX8D?IapiV} zus-ZdEvJwCWxejlPa}{`0$#NTADGMH3XKdI*?sY74E_E%6gYsQ(kKr&q;}%M+ZRU{ zSz<{a*@RFaeNfr+MNWUZiFrNyvr1P%s!kG(hsjh&;{rl`q`_8rm(%nf=8$gxqi1Mp z7MJ$2w?%v}Iqi<)+guJ-mq!?Nu|$)WSA8*RL!hd0a}gn%g6S6jeohfG>CAhgd|k_D z3J#YuNZUgeS*+npM9R`jN&&nMF;819ra3%e%cy)WTXpnqS@nh8 zm5Nb!NCNNWX=B}nq~q{SFVobz)k-rsC;Nuz@!K%PzYX0DX`lO;sA?g>iY(@bkKWHe z3(k2Af4ckd6)BUMXYEK6Dp8G-RlapziX;&^Y{dU8NWv}BLWGf4wp2a*>Bcc@u)0!D z)y{Z91YsyYUZ(Nwju74AE?dXvu93y4ayV*#KR>8=3w0MMNsy_#`qpWZgz#Ik+lxdE zK7rIZj^;)3Pg65HwRp$7#BablP7&q|dXCpwAtc2YR6ejwI4|hk?zB%gqibX$kTIXy zFlfc+enEzbFp|0WN@BN=knC*ls+!I=cYx#6ew@aqm9CW z8}s+xzesmh18J1HMWKU2>HX-YgJarsdyb1Pv3_u{4sm1UhQOH>MEI22U1h*^|#CrF|NJVMsv}%F|K^)dAz@TYtyv6 zkphQQ^LFa-6OoybA%Q)?jri<2I45)0Z6o~cy*EWmryTy;wr2^hs=!b9y6Tu={`J}L z9HV01(-@)Vi_XP-x|9cGL?Ye&}2j#CoP+Rf8hZT1ecb@3~{nnk)|0PmjbK8y0 zRtdQB*Jb-=GOl5lp;~g7BNW!wyGV%!h$Gi8xYLQT_V|NPP`R)^ z?4_lshu^5BoWI-`(C&2V;2>&uRVJUXoBs9maTvmW5xlhTJ43TaRlzZ3el)Nqb0#Ji z1kPVV;wE(6xH{_KX;YA&QyYFbXnb(Tu+*p^BH{rW>f)NbYYEcwL*`eAHs^}E zHcpfuOPA}L`I#^Ox*~qKqUP=|5}Iz^1X3kwtwr7)8G=viBRB)g0^*bGqX|CK46stazS{6HAA=38S z@x-hVo|Le9i+0~iOz@s7vQPkQr{8+B3UllYp=qf^P}HsNbFNZE}C_lAsS>rDkS4m`_1i^KQK{4pKqYkA=ei zD)HUd?owvp{?mwQaJavJ;b(}FikdOOuZ+WN=?&NTeL~jcG5-G|va1a7waOfMmsVmStM9Eiq25jq($~doFM(-Sm}6kKXry z&#!wFlw3ETGOFE5tp61xDbyc~@GWzPv8ihfB!+}}#x_8V{l3>bi;NdX*^7&9kw+}n z>;|e@o`r;n%a4l+U%;Ez7e^+|W{qsVDF+i&#!xoJZcQgul~bLPLoO9VZpk3Wg{r1Y zYK<)}&DjB6w{U!(E(VpmD#S(lb^6M*{R#wGA3-3vURI+oM;Y}nZ+;dWfkQ>vPU6um zQ%rLDSGAlF7B?OdPpX~GNI42D$nmQB_C32y>is;f`)B?uMJ3B+`8hQzLkG+&^Slw; zbmu!6S%~Nu&+`zD)N4nu?ij~p7T!tSxxbrz5&cBU2wy;t(L_Ai-1fm6Gvd@g9^_|Y z{r?keK5%N646jJ#&5W>x$L9{G5h$+e8z03wJJjrJnvlyUkPq z^ulqryh~v!6UJR|vKdFK2aQ8Q;h32t>|3OC)GB`{2_y8{2uW?339E9zjrymFQdlxL zaV630au`foVAp?Z>_umu?dmzUFmvPI)q3QF#d%&T#^IHlvFwk~4>MNgl_r`QFoN}y z^N)6xrwX`mkeXlZc#<_bA#33zKu+&DU3s2!a**@=D1G$9yta9(NdXQv3$>%Gt_v)m zVe%6{%|yQgy-@a*%k6RTdeh=gT@rgfURdJZ2qSb7Qep@84>CqO_oTA)ttSWJtvB-y z*tg$+(={DU@>{-st2*w>Tu4TrbUd3>%t(v%gEze-EWdNjCI<${>t*aGrr01CQP9!4 zwLeeCb40f^u&r@oA-Wd74QHJ>HhOTIe=>uAY-;cT;xcAFMrp5S;rzwI`no-mOK56~ zVkff2Oewg!c#W)V7LSI*JyU;VD%~)aRcb#rZ*hJbpW(~4<8-{(C{I-=ucNlKPbwAj zRYRaXq{efSw|Z4tYv66Dx>QVJCPzt{P5$>Z1vIq{*b7I8cW!3Uc9Ac6Ipo#ty@UL0 zyZ1qc2|o)1qG>r`P))f$q$1@={ISJ6^^s(4+vOkQBpDR`=Uhh+J8>#z^m23=f!QLh zj?RKJR?9J@&-3_LH-esBKL8?A`ZASKxQ~u)QKQ^HqwA34?2{IB*&mN*2((c9wTH1O zlMg=g#ydz1C3~n)Dc;_GHpwqEt{+Lqan`x2nplpRoa=W4W28g)3*y`qFRc{rFpNj||7gDjVhJ<+&V3Z3p!|wQpCW|s9Jd{PoitC|j`X%@B;LK-((37^rDs&+_=HK@ zQeoJTJ~-;&U8D2S+pLUc!zGNIHnbB><3Qz3!;#cX=eRUnJSc;;cr>Z_>y&8=>mYaXIznnMDuemv% z(4|cUcD2FGRID5&gpF`;3au|uB;=K&UX~3s!K5Ez#8v7+@-<3saAq=H{VExARn?L( zvZRn~4Eaf!fg60+Olw7!Y$N!Bp_NcCi1-Qhf%LI)`;+hSI{S4p(E-JRwkWvQ5eK(S+R z7>{?B!%o9a$e4r=?SWN$c%^o4TQeF1E?5eulYJ!vcOV}79KH&XRtSRoU3ts(-)U-d zUy`3@@SiiwQ01tDiCSZ@$5Fb!2Oz(UXHh>{bA128dok7akP6a>31uFh7HdOec|;S9 z&C26G2T6+G5CKVbHjF7}MYzZyf4-wewoYwwWC>F$rcM2Y_;A46J{{l{F{ulqX*9kE zsu~VS$6wNS#v&+2JHsHw4ICf%?7pJ)u~}jXrA8M;K+}@r>~-U*?WJt@Tqf`jkC06d zH^f#)OmnOGn$Nde2A(^3Ev-K=hF2m5akdRt(n9qNRU?guW9aAJ2QaE*YGVRxM@1m= zV<{XVru5RVek~ofN0TCbyJLQl=##t4@R{a%SZMoliZg`2%u_*1KIr!yq=-44zk zi&^E!vL@VyX#5eRk==%So)jC`j5Lzh$R~|3RM3vW&Rt65gXuH@CK0J^kzM$<$~Z;h zq2U24&x&0T6E~y6irquzP~^EkAftHvfAUGI5}t9=C+(SQh2IQqA={=^{qm`;0 z(&rPlHI|%Eb{a!*%kjx%eSx((H=6%-MW@WTZ}U97WiP8%V97O^f}@ 
z;)A`$wh&6vooRy~<6rO)#`00)U*?p_4Y<~b{%i|Upw0v`xkDnaCj?|2!82a|b}L}; z{EG14jQR_Y3ol-%)vWec79ywRC`UT+7!Z>Dmow*vdW&D(kMiD?*$&SLWQ#iBkV1CS zm}yMh(3r=9B|RE@d{am5)!2K-*wRKx44V(=CdQam^%%+|C0Vm=ljB^+wrv`hHLeOq z6wy{SCN=j;BBvGNOm5xnz7XytWTVd5LNwqQzO-FDtL0>Qnmf@=*d|84X9+mqI7<~n ztB8Y_aS)0zZm8 zGZ!U7pT97qXR~Ny7UJHQW0#ut9S(t5C&*KSrn&=u;qT*XiEeka2DB~1IPdFQcquxW zJg|Rw@|LD2cjqHEWqz20Tg&xZiDIk_Yr*{h5>UdiCVnT)_5AXH7+_zN=&w9#uimb+ z22&$qH~YC}Mn9Uq`Vv=zKQbixHA|ESCxU3%ER=c#&5gWR^*@SnvgJX{OCv)>Cm4{QV|+i*)cEpl z{`U54(}i3<1>YMuO5(H}J9TzS2iF*HXHXz$gVl95!H74e-5XOj%$DG7Rg!YI*11!> zvs=rH;hA-1n8;4bo{9iB)KE1t)bG{oZqsx3P;@KREolb43@O7$$=lrqV9WqYZ0QyT zzTF}cwsz~8P-GkFvzx-#MHs1PM4ld*l4agKF%ccL1iWKcrNh9mKB|FJc4Tjv4YruO zcfwR$p{pQw=OoPHs*tvCQ8>&-oW4Nc( zFFO5UZv$-sO{6xC#0n4czP=}6LilEqK1|lf^b~|un;hKynmf-fFi=jDP?1bayM++? ze!`|S?hd%rDt)qRWWc*^#}zEUoyf~`*D-$mkA1i(1eJw_$ndkMf0qkr>QpHm?W4Fm_(e5~ zUg>|0{bIqAcL_;4h*STgy1XQ9MSNp;Z-+%ca*l`ciDOI|q)8Bi;PteH?!CYsbcp2L zZM7A+=wh8J(=E|W|LEcI8-laHSKz{9NphqSJn-_0n5jJY+38^k6Llaf^0zt0q=9d? zV_^5Q3-^5yI;DLO2>Rb%C1%>=6E|2hb>rmN#u{Ofl~=_*f4l~_LU~5L;X4#c0bpc) zAWvk|$DQMlDtWjBZ1&^BqVvqSm96p_eRc`qJ){&A31eQ1cf#Ki^%}D02hKjgypWr~ z%l>CmubE*&)n}j60~$l=f};l2R+2r83m{((#&{4x)+Fn{L=VE}9%JfYxU!h-%^7(!1(a6`iW2K=abW(g`JNw|(`TYv16a9Fn z{QBQv?v+5b1dJ}pMoe7Cyvz;9w3eHQbVH1^w?hQnXi%EHABCnC5>naoFlZ#tj7q>S zQohwrgZ*w782`d1gRmgfOx$~0t#sBOf@ds@dfIaY4}?F-MRh|t`9`^Ya949ElacdB z90eW?Wf7<5S-y3us%ZY*D+9kyNxn7|LLzkf6K5T2oi0HkuBann)?rn6cRz`&(4>0Y z98Khd4|a9`prQ8bZ%jIB!{5ZOv%#dHtBhp@y3W}Z z;Ul_PHK%e_f6WM~L+m5llu~4xZlCTdlL|}LKUl_~!g$lNA{Xzw96?J>$Vww}K zpYL&Pg?TDlbR^s%k$I9p&&YBc;o}j?hah0G!J-euwa_f%*u`)HWCL4cFZ&|o{2#@! z(qtN{dOgi1BVn^|2phIHH8q6d=s=X|+>h&6p9OE-k{EYhe+9S4>CztC^1Gq>bY8y-0(C~`DoDtCSPJC}(9>rH zsYsqQc}*9~#`n<%6S*rx+|=i6bMp3OBFFM+JlM@*jeN&eD%4vims34q96h5fw4w(! zuPX1a-LNV>Z8&jd2b`V>w`Vgu)c)8;23Ue#hg}Xh*sEWM0jIdiJpdWN^^c|C`9TZ- zw2SiH;82kMf&~>u+Kl+N1bFj#>C(NulWD5Pf+LT5@FnMNuaFMnzt6TQc@V>Qad#R{ zXURjhXVH3t3~YJeJWUa`9Ixm-H@n9gUa83GnEC&^Hm94;0f~taLNA00uOpHAQmI-)+gsjYl}y9i{ek~uhJEa=u~7I z9T4DJW|5WI%j1rdyd~OgZ5;h7ZlZH1BFbRYVOL23xZ-jGQ7ez-+dZN2Q%!!F z!_xd53R$IFqJOL$K@?O+BRUt+@8qtCs&tg3_20k`z{z{X$Tsm8@W|#E*6jx{Mi8zx zu$GvgLzU!)OnQW@XbY)%UGfy<3NujXgiyYh5>lfTp5)%x<^Zq`!?xn!AUwavM_$9I zX2HjkwE$jOk|wSxl4B{Ul{{!5@wNY@Nd|_v$!jt^{XJ)ZPmWTd>UL5(G_wQCX4qrp z++Bm9!B{H^t>xUacXitkqkNxZG=Ob=5J6eR+BRmEV5?4*2ip}Yf4xHy4p55=W%uQL z^9#%1hZ!-d6J$THdYVy>=-FZ2Zb2N&f76U)X}-+L}aVgLy(s|IV*GWIRE{({L*s_gsx zdQ(2ry!hs|J&%FY3Mn7Eq_O6}U&zCn`i6qh#GQ#ppC$HE-W-+EQ2xdhW37HH){_EC zs}2MyY$lL(D-ns91GKD0eb>?nw*cK2pn@&5?EfPL{bvYZ#6&!$?Wbv30LcO% zOdU0v%+oKS@=og|zkWR_-+BrY|OY5DZnGmZ%DDOb%TX7I|-D6nIWku+qjM zDB-4VYs_@R4|&uH9Og>jg)8QFRR0*|(=u{iZMS7?V@=6QV=SS@cAmVuCg?;=WS7O9 zEeCa9Reb)7f0)m)SU)G#tiT+I8l3mu{;2k=^qx`A&s6Jxr{aBzSXM9-kU!9T z1>@cCnj@i74eF5FvY*VJ(JGdt1zKZ*|Z3xBy8kkb{m0~tA1P}O;C3b(qA2uQ= zldo{kYSV?D+iPdjJCc;|%1Sm>xdKZClaX03E50(^r7=@&%ru!5{2>F&vaz|s9(n4* zHncc3Ir?%>%xdL9egYX*a*~~A_r1obMu}Dutwii6M4U93pI=LNY7z!8Rh8f#%2kvjQwB5AK{`m^7DsQGb;$WkCz8QCz3+-x&oJ&$4qH){KBoo*CW`(9}yi1;ZWP^1C7R(p5HG; zL@M$>6%orQ9ORi(x+pxXRVg_*>C!i&q7Tt!+t48)1kaHay*j#S!Lj|OZm|IhjUWZ+ z(}kz4+I}0}1!VNJzE`cT7Rq4&mNWW3Z21*P5imK0r0stJZ6~;NtQ|9TxzYV`-4f=uyl^j^XhiI!2a)#3ai_eRdB`Np2gbNGTdRXqmLa zKv+}p^{@$?55hqRcL`ytTT_*UUM@|seN^vb> z@#|$%o{%^Cbtnn`6a(M?y^*()YRmUV?|>BS)23JA_E>f$$3;n*Uum~X{vZ9u7g5k} zZ2OB}FQH{KnD7MA*wEJDuDKRO8QTW?S_jx?{8-9g)~(PSgdVGd_!FG#mce02tGS4) z!GXv5Z-{pX*E8! 
zKy?!6SPJ9}(DaYvNplVEOA# zOT@~!jMrsIEwGJk$(=nf2NT_9R1o+^VWOU)CCc%U#>PrjhHm=Vb+5#*Ys=O23Rv5T zH*a#KmZpJV2m}j|F{iv#bDX+R-ZWKAJ7Iis_8QOZh^-h=^CF9_(1rjvF{tmJYc@Em zrP2xhv=&WV{(!@Z^J^$65=!{#qUl(<`I&_K){Sp@wDCjgM`60a1QJ@{IuR-dK!V88 zvSSliud2VOPW%*z9YW8_1q^QDt> zfAG22Xn#S5(OvnNmit-X(@ddvb4GIKY7;h@&9{aakUk%_HTyd6jU_Y(@c*i@bow?; zn_?o*@tK`EMBiCCA8kJ~M5_Zay4dd*B%fjTEjv7qiFI11@K7!z_0Y$4mi&t5fWwmuG~3z+dHCXE#-97j@Gyu66R@t-H?0#gB#tey7>KiH;PoT|AI`# Date: Mon, 16 Oct 2023 09:56:48 -0500 Subject: [PATCH 02/53] update theme --- themes/risotto | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/themes/risotto b/themes/risotto index 21fdc87..4343550 160000 --- a/themes/risotto +++ b/themes/risotto @@ -1 +1 @@ -Subproject commit 21fdc87b56e662133c9bba6ee96940ed8f5be6a6 +Subproject commit 4343550d785d8cce942ac5109aa9fdd9d9a70823 From 3b54790c75e453503c523572f27e4e5fa4804cc5 Mon Sep 17 00:00:00 2001 From: John Bowdre Date: Mon, 16 Oct 2023 11:58:46 -0500 Subject: [PATCH 03/53] support displaying prompt symbol on command blocks --- config/_default/markup.toml | 6 +- .../render-codeblock-command-session.html | 3 + .../_markup/render-codeblock-command.html | 3 + .../render-codeblock-commandroot-session.html | 3 + .../_markup/render-codeblock-commandroot.html | 3 + static/css/custom.css | 31 +++++++ static/css/palettes/runtimeterror.css | 4 +- static/css/syntax.css | 86 +++++++++++++++++++ 8 files changed, 134 insertions(+), 5 deletions(-) create mode 100644 layouts/_default/_markup/render-codeblock-command-session.html create mode 100644 layouts/_default/_markup/render-codeblock-command.html create mode 100644 layouts/_default/_markup/render-codeblock-commandroot-session.html create mode 100644 layouts/_default/_markup/render-codeblock-commandroot.html create mode 100644 static/css/syntax.css diff --git a/config/_default/markup.toml b/config/_default/markup.toml index ad52c8e..81fc728 100644 --- a/config/_default/markup.toml +++ b/config/_default/markup.toml @@ -11,12 +11,12 @@ codeFences = true guessSyntax = true hl_Lines = '' - lineAnchors = '' - lineNos = true + lineNos = false lineNoStart = 1 lineNumbersInTable = false - noClasses = true + noClasses = false tabwidth = 2 + style = 'monokai' # Table of contents # Add toc = true to content front matter to enable diff --git a/layouts/_default/_markup/render-codeblock-command-session.html b/layouts/_default/_markup/render-codeblock-command-session.html new file mode 100644 index 0000000..6386861 --- /dev/null +++ b/layouts/_default/_markup/render-codeblock-command-session.html @@ -0,0 +1,3 @@ +
+ {{- highlight .Inner "bash" -}} +
diff --git a/layouts/_default/_markup/render-codeblock-command.html b/layouts/_default/_markup/render-codeblock-command.html new file mode 100644 index 0000000..6dd6714 --- /dev/null +++ b/layouts/_default/_markup/render-codeblock-command.html @@ -0,0 +1,3 @@ +
+ {{- highlight .Inner "bash" -}} +
diff --git a/layouts/_default/_markup/render-codeblock-commandroot-session.html b/layouts/_default/_markup/render-codeblock-commandroot-session.html new file mode 100644 index 0000000..f148561 --- /dev/null +++ b/layouts/_default/_markup/render-codeblock-commandroot-session.html @@ -0,0 +1,3 @@ +
+ {{- highlight .Inner "bash" -}} +
diff --git a/layouts/_default/_markup/render-codeblock-commandroot.html b/layouts/_default/_markup/render-codeblock-commandroot.html new file mode 100644 index 0000000..e3d93ad --- /dev/null +++ b/layouts/_default/_markup/render-codeblock-commandroot.html @@ -0,0 +1,3 @@ +
+ {{- highlight .Inner "bash" -}} +
diff --git a/static/css/custom.css b/static/css/custom.css index eaf156b..d09bd0d 100644 --- a/static/css/custom.css +++ b/static/css/custom.css @@ -1,3 +1,5 @@ +@import 'syntax.css'; + /* override page max-width */ .page { max-width: 72rem; @@ -133,3 +135,32 @@ body.dark .notice { top: 0.125em; position: relative; } + +/* Insert prompt char ::before on every line in command codeblocks */ +.command .line {display: inherit;} +.command .line::before { + color: var(--base07); + content: "$ "; +} + +.commandroot .line {display: inherit;} +.commandroot .line::before { + color: var(--base08); + content: "# "; +} + +/* Insert prompt char ::before on first line in cmd-session codeblocks +(These are useful for showing returned values from commands) +*/ +.command-session .line {display: inherit;} +.command-session code::before { + color: var(--base07); + content: "$ "; +} + +.commandroot-session .line {display: inherit;} +.commandroot-session code::before { + color: var(--base08); + content: "# "; +} + diff --git a/static/css/palettes/runtimeterror.css b/static/css/palettes/runtimeterror.css index ce408d4..c12bebf 100644 --- a/static/css/palettes/runtimeterror.css +++ b/static/css/palettes/runtimeterror.css @@ -9,8 +9,8 @@ --base04: #959494; /* alt foreground */ --base05: #d8d8d8; /* foreground */ --base06: #e8e8e8; - --base07: #f8f8f8; - --base08: #ab4642; + --base07: #5f8700; /* user prompt */ + --base08: #ab4642; /* root prompt */ --base09: #dc9656; --base0A: #f7ca88; /* highlights */ --base0B: #772a28; /* primary accent */ diff --git a/static/css/syntax.css b/static/css/syntax.css new file mode 100644 index 0000000..2920331 --- /dev/null +++ b/static/css/syntax.css @@ -0,0 +1,86 @@ +/* Background */ .bg { color: #f8f8f2; background-color: #272822; } +/* PreWrapper */ .chroma { color: #f8f8f2; background-color: #272822; } +/* Other */ .chroma .x { } +/* Error */ .chroma .err { color: #960050; background-color: #1e0010 } +/* CodeLine */ .chroma .cl { } +/* LineLink */ .chroma .lnlinks { outline: none; text-decoration: none; color: inherit } +/* LineTableTD */ .chroma .lntd { vertical-align: top; padding: 0; margin: 0; border: 0; } +/* LineTable */ .chroma .lntable { border-spacing: 0; padding: 0; margin: 0; border: 0; } +/* LineHighlight */ .chroma .hl { background-color: #ffffcc } +/* LineNumbersTable */ .chroma .lnt { white-space: pre; -webkit-user-select: none; user-select: none; margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f } +/* LineNumbers */ .chroma .ln { white-space: pre; -webkit-user-select: none; user-select: none; margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f } +/* Line */ .chroma .line { display: flex; } +/* Keyword */ .chroma .k { color: #66d9ef } +/* KeywordConstant */ .chroma .kc { color: #66d9ef } +/* KeywordDeclaration */ .chroma .kd { color: #66d9ef } +/* KeywordNamespace */ .chroma .kn { color: #f92672 } +/* KeywordPseudo */ .chroma .kp { color: #66d9ef } +/* KeywordReserved */ .chroma .kr { color: #66d9ef } +/* KeywordType */ .chroma .kt { color: #66d9ef } +/* Name */ .chroma .n { } +/* NameAttribute */ .chroma .na { color: #a6e22e } +/* NameBuiltin */ .chroma .nb { } +/* NameBuiltinPseudo */ .chroma .bp { } +/* NameClass */ .chroma .nc { color: #a6e22e } +/* NameConstant */ .chroma .no { color: #66d9ef } +/* NameDecorator */ .chroma .nd { color: #a6e22e } +/* NameEntity */ .chroma .ni { } +/* NameException */ .chroma .ne { color: #a6e22e } +/* NameFunction */ .chroma .nf { color: #a6e22e } +/* NameFunctionMagic */ .chroma .fm { } +/* NameLabel */ 
.chroma .nl { } +/* NameNamespace */ .chroma .nn { } +/* NameOther */ .chroma .nx { color: #a6e22e } +/* NameProperty */ .chroma .py { } +/* NameTag */ .chroma .nt { color: #f92672 } +/* NameVariable */ .chroma .nv { } +/* NameVariableClass */ .chroma .vc { } +/* NameVariableGlobal */ .chroma .vg { } +/* NameVariableInstance */ .chroma .vi { } +/* NameVariableMagic */ .chroma .vm { } +/* Literal */ .chroma .l { color: #ae81ff } +/* LiteralDate */ .chroma .ld { color: #e6db74 } +/* LiteralString */ .chroma .s { color: #e6db74 } +/* LiteralStringAffix */ .chroma .sa { color: #e6db74 } +/* LiteralStringBacktick */ .chroma .sb { color: #e6db74 } +/* LiteralStringChar */ .chroma .sc { color: #e6db74 } +/* LiteralStringDelimiter */ .chroma .dl { color: #e6db74 } +/* LiteralStringDoc */ .chroma .sd { color: #e6db74 } +/* LiteralStringDouble */ .chroma .s2 { color: #e6db74 } +/* LiteralStringEscape */ .chroma .se { color: #ae81ff } +/* LiteralStringHeredoc */ .chroma .sh { color: #e6db74 } +/* LiteralStringInterpol */ .chroma .si { color: #e6db74 } +/* LiteralStringOther */ .chroma .sx { color: #e6db74 } +/* LiteralStringRegex */ .chroma .sr { color: #e6db74 } +/* LiteralStringSingle */ .chroma .s1 { color: #e6db74 } +/* LiteralStringSymbol */ .chroma .ss { color: #e6db74 } +/* LiteralNumber */ .chroma .m { color: #ae81ff } +/* LiteralNumberBin */ .chroma .mb { color: #ae81ff } +/* LiteralNumberFloat */ .chroma .mf { color: #ae81ff } +/* LiteralNumberHex */ .chroma .mh { color: #ae81ff } +/* LiteralNumberInteger */ .chroma .mi { color: #ae81ff } +/* LiteralNumberIntegerLong */ .chroma .il { color: #ae81ff } +/* LiteralNumberOct */ .chroma .mo { color: #ae81ff } +/* Operator */ .chroma .o { color: #f92672 } +/* OperatorWord */ .chroma .ow { color: #f92672 } +/* Punctuation */ .chroma .p { } +/* Comment */ .chroma .c { color: #75715e } +/* CommentHashbang */ .chroma .ch { color: #75715e } +/* CommentMultiline */ .chroma .cm { color: #75715e } +/* CommentSingle */ .chroma .c1 { color: #75715e } +/* CommentSpecial */ .chroma .cs { color: #75715e } +/* CommentPreproc */ .chroma .cp { color: #75715e } +/* CommentPreprocFile */ .chroma .cpf { color: #75715e } +/* Generic */ .chroma .g { } +/* GenericDeleted */ .chroma .gd { color: #f92672 } +/* GenericEmph */ .chroma .ge { font-style: italic } +/* GenericError */ .chroma .gr { } +/* GenericHeading */ .chroma .gh { } +/* GenericInserted */ .chroma .gi { color: #a6e22e } +/* GenericOutput */ .chroma .go { } +/* GenericPrompt */ .chroma .gp { } +/* GenericStrong */ .chroma .gs { font-weight: bold } +/* GenericSubheading */ .chroma .gu { color: #75715e } +/* GenericTraceback */ .chroma .gt { } +/* GenericUnderline */ .chroma .gl { } +/* TextWhitespace */ .chroma .w { } From e82a0ad9378341e501cf28f7d261cb2701c206f6 Mon Sep 17 00:00:00 2001 From: John Bowdre Date: Mon, 16 Oct 2023 13:13:58 -0500 Subject: [PATCH 04/53] improve readability of code line highlights --- static/css/syntax.css | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/static/css/syntax.css b/static/css/syntax.css index 2920331..41c6633 100644 --- a/static/css/syntax.css +++ b/static/css/syntax.css @@ -6,7 +6,7 @@ /* LineLink */ .chroma .lnlinks { outline: none; text-decoration: none; color: inherit } /* LineTableTD */ .chroma .lntd { vertical-align: top; padding: 0; margin: 0; border: 0; } /* LineTable */ .chroma .lntable { border-spacing: 0; padding: 0; margin: 0; border: 0; } -/* LineHighlight */ .chroma .hl { background-color: #ffffcc } +/* LineHighlight */ .chroma .hl { 
background-color: #ffffcc3f } /* LineNumbersTable */ .chroma .lnt { white-space: pre; -webkit-user-select: none; user-select: none; margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f } /* LineNumbers */ .chroma .ln { white-space: pre; -webkit-user-select: none; user-select: none; margin-right: 0.4em; padding: 0 0.4em 0 0.4em;color: #7f7f7f } /* Line */ .chroma .line { display: flex; } From 741d4fd5885763cc5f1c4ba93f977348cdfeb913 Mon Sep 17 00:00:00 2001 From: John Bowdre Date: Mon, 16 Oct 2023 16:48:55 -0500 Subject: [PATCH 05/53] update posts for new code block stuff --- .../index.md | 51 +++-- .../index.md | 45 ---- .../index.md | 16 +- .../index.md | 22 +- .../index.md | 2 +- .../index.md | 152 +++++++------ .../index.md | 22 +- .../posts/cat-file-without-comments/index.md | 16 +- .../index.md | 97 ++++---- .../index.md | 59 +++-- .../index.md | 40 ++-- .../index.md | 89 ++++---- .../index.md | 4 +- content/posts/esxi-arm-on-quartz64/index.md | 68 +++--- .../index.md | 103 ++++----- .../index.md | 12 +- .../fixing-403-error-ssc-8-6-vra-idm/index.md | 24 +- .../index.md | 9 +- .../getting-started-vra-rest-api/index.md | 180 +++++++-------- .../gitea-self-hosted-git-server/index.md | 89 ++++---- .../index.md | 77 ++++--- .../index.md | 20 +- .../index.md | 66 +++--- .../index.md | 108 +++++---- .../index.md | 38 ++-- .../index.md | 6 +- .../index.md | 18 +- .../index.md | 4 +- .../index.md | 4 +- .../index.md | 30 +-- .../index.md | 2 +- .../index.md | 38 ++-- .../index.md | 88 ++++---- .../index.md | 18 +- .../index.md | 8 +- .../index.md | 44 ++-- .../index.md | 32 +-- .../index.md | 89 ++++---- .../index.md | 2 +- .../index.md | 52 +++-- .../posts/tailscale-on-vmware-photon/index.md | 6 +- .../index.md | 212 ++++++++++-------- .../index.md | 13 +- .../index.md | 8 +- .../index.md | 40 ++-- .../index.md | 20 +- .../virtuallypotato-runtimeterror/index.md | 2 +- .../vmware-home-lab-on-intel-nuc-9/index.md | 36 +-- .../index.md | 2 +- .../index.md | 30 +-- .../index.md | 18 +- .../index.md | 38 ++-- .../index.md | 36 +-- 53 files changed, 1173 insertions(+), 1132 deletions(-) delete mode 100644 content/posts/accessing-tce-cluster-from-new-device/index.md diff --git a/content/posts/3d-modeling-and-printing-on-chrome-os/index.md b/content/posts/3d-modeling-and-printing-on-chrome-os/index.md index f0f4b76..6ff2f16 100644 --- a/content/posts/3d-modeling-and-printing-on-chrome-os/index.md +++ b/content/posts/3d-modeling-and-printing-on-chrome-os/index.md @@ -13,38 +13,41 @@ title: 3D Modeling and Printing on Chrome OS I've got an Ender 3 Pro 3D printer, a Raspberry Pi 4, and a Pixel Slate. I can't interface directly with the printer over USB from the Slate (plus having to be physically connected to things is like so lame) so I installed [Octoprint on the Raspberry Pi](https://github.com/guysoft/OctoPi) and connected that to the printer's USB interface. This gave me a pretty web interface for controlling the printer - but it's only accessible over the local network. I also installed [The Spaghetti Detective](https://www.thespaghettidetective.com/) to allow secure remote control of the printer, with the added bonus of using AI magic and a cheap camera to detect and abort failing prints. -That's a pretty sweet setup, but I still needed a way to convert STL 3D models into GCODE files which the printer can actually understand. And what if I want to create my own designs? 
+That's a pretty sweet setup, but I still needed a way to convert STL 3D models into GCODE files which the printer can actually understand. And what if I want to create my own designs? -Enter "Crostini," Chrome OS's [Linux (Beta) feature](https://chromium.googlesource.com/chromiumos/docs/+/master/containers_and_vms.md). It consists of a hardened Linux VM named `termina` which runs (by default) a Debian Buster LXD container named `penguin` (though you can spin up just about any container for which you can find an [image](https://us.images.linuxcontainers.org/)) and some fancy plumbing to let Chrome OS and Linux interact in specific clearly-defined ways. It's a brilliant balance between offering the flexibility of Linux while preserving Chrome OS's industry-leading security posture. +Enter "Crostini," Chrome OS's [Linux (Beta) feature](https://chromium.googlesource.com/chromiumos/docs/+/master/containers_and_vms.md). It consists of a hardened Linux VM named `termina` which runs (by default) a Debian Buster LXD container named `penguin` (though you can spin up just about any container for which you can find an [image](https://us.images.linuxcontainers.org/)) and some fancy plumbing to let Chrome OS and Linux interact in specific clearly-defined ways. It's a brilliant balance between offering the flexibility of Linux while preserving Chrome OS's industry-leading security posture. ![Neofetch in the Crostini terminal](lhTnVwCO3.png) -There are plenty of great guides (like [this one](https://www.computerworld.com/article/3314739/linux-apps-on-chrome-os-an-easy-to-follow-guide.html)) on how to get started with Linux on Chrome OS so I won't rehash those steps here. +There are plenty of great guides (like [this one](https://www.computerworld.com/article/3314739/linux-apps-on-chrome-os-an-easy-to-follow-guide.html)) on how to get started with Linux on Chrome OS so I won't rehash those steps here. -One additional step you will probably want to take is make sure that your Chromebook is configured to enable hyperthreading, as it may have [hyperthreading disabled by default](https://support.google.com/chromebook/answer/9340236). Just plug `chrome://flags/#scheduler-configuration` into Chrome's address bar, set it to `Enables Hyper-Threading on relevant CPUs`, and then click the button to restart your Chromebook. You'll thank me later. +One additional step you will probably want to take is make sure that your Chromebook is configured to enable hyperthreading, as it may have [hyperthreading disabled by default](https://support.google.com/chromebook/answer/9340236). Just plug `chrome://flags/#scheduler-configuration` into Chrome's address bar, set it to `Enables Hyper-Threading on relevant CPUs`, and then click the button to restart your Chromebook. You'll thank me later. ![Enabling hyperthreading](LHax6lAwh.png) ### The Software -I settled on using [FreeCAD](https://www.freecadweb.org/) for parametric modeling and [Ultimaker Cura](https://ultimaker.com/software/ultimaker-cura) for my GCODE slicer, but unfortunately getting them working cleanly wasn't entirely straightforward. +I settled on using [FreeCAD](https://www.freecadweb.org/) for parametric modeling and [Ultimaker Cura](https://ultimaker.com/software/ultimaker-cura) for my GCODE slicer, but unfortunately getting them working cleanly wasn't entirely straightforward. 
#### FreeCAD Installing FreeCAD is as easy as: -```shell -$ sudo apt update -$ sudo apt install freecad +```command +sudo apt update +sudo apt install freecad ``` But launching `/usr/bin/freecad` caused me some weird graphical defects which rendered the application unusable. I found that I needed to pass the `LIBGL_DRI3_DISABLE=1` environment variable to eliminate these glitches: -```shell -$ env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad & +```command +env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad & ``` To avoid having to type that every time I wished to launch the app, I inserted this line at the bottom of my `~/.bashrc` file: -```shell +```command alias freecad="env 'LIBGL_DRI3_DISABLE=1' /usr/bin/freecad &" ``` To be able to start FreeCAD from the Chrome OS launcher with that environment variable intact, edit it into the `Exec` line of the `/usr/share/applications/freecad.desktop` file: -```shell -$ sudo vi /usr/share/applications/freecad.desktop +```command +sudo vi /usr/share/applications/freecad.desktop +``` + +```cfg {linenos=true} [Desktop Entry] Version=1.0 Name=FreeCAD @@ -64,32 +67,32 @@ GenericName[de_DE]=Feature-basierter parametrischer Modellierer Comment[de_DE]=Feature-basierter parametrischer Modellierer MimeType=application/x-extension-fcstd ``` -That's it! Get on with your 3D-modeling bad self. +That's it! Get on with your 3D-modeling bad self. ![FreeCAD](qDTXt1jp3.png) -Now that you've got a model, be sure to [export it as an STL mesh](https://wiki.freecadweb.org/Export_to_STL_or_OBJ) so you can import it into your slicer. +Now that you've got a model, be sure to [export it as an STL mesh](https://wiki.freecadweb.org/Export_to_STL_or_OBJ) so you can import it into your slicer. #### Ultimaker Cura -Cura isn't available from the default repos so you'll need to download the AppImage from https://github.com/Ultimaker/Cura/releases/tag/4.7.1. You can do this in Chrome and then use the built-in File app to move the file into your 'My Files > Linux Files' directory. Feel free to put it in a subfolder if you want to keep things organized - I stash all my AppImages in `~/Applications/`. +Cura isn't available from the default repos so you'll need to download the AppImage from https://github.com/Ultimaker/Cura/releases/tag/4.7.1. You can do this in Chrome and then use the built-in File app to move the file into your 'My Files > Linux Files' directory. Feel free to put it in a subfolder if you want to keep things organized - I stash all my AppImages in `~/Applications/`. To be able to actually execute the AppImage you'll need to adjust the permissions with 'chmod +x': -```shell -$ chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage +```command +chmod +x ~/Applications/Ultimaker_Cura-4.7.1.AppImage ``` You can then start up the app by calling the file directly: -```shell -$ ~/Applications/Ultimaker_Cura-4.7.1.AppImage & +```command +~/Applications/Ultimaker_Cura-4.7.1.AppImage & ``` AppImages don't automatically appear in the Chrome OS launcher so you'll need to create its `.desktop` file. You can do this manually if you want, but I found it a lot easier to leverage `menulibre`: -```shell -$ sudo apt update && sudo apt install menulibre -$ menulibre +```command +sudo apt update && sudo apt install menulibre +menulibre ``` Just plug in the relevant details (you can grab the appropriate icon [here](https://github.com/Ultimaker/Cura/blob/master/icons/cura-128.png)), hit the filing cabinet Save icon, and you should then be able to search for Cura from the Chrome OS launcher. 
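For reference, the launcher entry that menulibre saves (typically under `~/.local/share/applications/`) should end up looking something like the sketch below - the exact `Exec` and `Icon` paths will of course depend on where you stashed the AppImage and icon on your system:
```cfg
[Desktop Entry]
Version=1.0
Type=Application
Name=Ultimaker Cura
Comment=GCODE slicer for 3D printing
# Example paths - point these at wherever your AppImage and icon actually live
Exec=/home/username/Applications/Ultimaker_Cura-4.7.1.AppImage
Icon=/home/username/Applications/cura-128.png
Terminal=false
Categories=Graphics;
```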
![Using menulibre to create the launcher shortcut](VTISYOKHO.png) ![Ultimaker Cura](f8nRJcyI6.png) -From there, just import the STL mesh, configure the appropriate settings, slice, and save the resulting GCODE. You can then just upload the GCODE straight to The Spaghetti Detective and kick off the print. +From there, just import the STL mesh, configure the appropriate settings, slice, and save the resulting GCODE. You can then just upload the GCODE straight to The Spaghetti Detective and kick off the print. ![Successful print, designed and sliced on Chrome OS!](2g57odtq2.jpeg) diff --git a/content/posts/accessing-tce-cluster-from-new-device/index.md b/content/posts/accessing-tce-cluster-from-new-device/index.md deleted file mode 100644 index 4b43a12..0000000 --- a/content/posts/accessing-tce-cluster-from-new-device/index.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: "Accessing a Tanzu Community Edition Kubernetes Cluster from a new device" # Title of the blog post. -date: 2022-02-01T10:58:57-06:00 # Date of post creation. -# lastmod: 2022-02-01T10:58:57-06:00 # Date when last modified -description: "The Tanzu Community Edition documentation does a great job of explaining how to authenticate to a newly-deployed cluster at the tail end of the installation steps, but how do you log in from another system?" # Description used for search engine. -featured: false # Sets if post is a featured post, making appear on the home page side bar. -draft: true # Sets whether to render this page. Draft of true will not be rendered. -toc: false # Controls if a table of contents should be generated for first-level links automatically. -usePageBundles: true -# menu: main -# featureImage: "file.png" # Sets featured image on blog post. -# featureImageAlt: 'Description of image' # Alternative text for featured image. -# featureImageCap: 'This is the featured image.' # Caption (optional). -# thumbnail: "thumbnail.png" # Sets thumbnail image appearing inside card on homepage. -# shareImage: "share.png" # Designate a separate image for social media sharing. -codeLineNumbers: false # Override global value for showing of line numbers within code block. -series: Tips -tags: - - vmware - - kubernetes - - tanzu -comment: true # Disable comment if false. ---- -When I [recently set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since I knew that my Chromebook Linux environment wouldn't support the `kind` bootstrap cluster used for the deployment. But now I'd like to be able to connect to the cluster directly using the `tanzu` and `kubectl` CLI tools. How do I get the appropriate cluster configuration over to my Chromebook? - -The Tanzu CLI actually makes that pretty easy. I just run these commands on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) and workload (`tce-work`) clusters to a pair of files: -```shell -tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml -tanzu cluster kubeconfig get tce-work --admin --export-file tce-work-kubeconfig.yaml -``` - -I could then use `scp` to pull the files from the VM into my local Linux environment. 
I then needed to [install `kubectl`](/tanzu-community-edition-k8s-homelab/#kubectl-binary) and the [`tanzu` CLI](/tanzu-community-edition-k8s-homelab/#tanzu-cli) (making sure to also [enable shell auto-completion](/enable-tanzu-cli-auto-completion-bash-zsh/) along the way!), and I could import the configurations locally: - -```shell -❯ tanzu login --kubeconfig tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt -✔ successfully logged in to management cluster using the kubeconfig tce-mgmt - -❯ tanzu login --kubeconfig tce-work-kubeconfig.yaml --context tce-work-admin@tce-work --name tce-work -✔ successfully logged in to management cluster using the kubeconfig tce-work -``` - - - - - diff --git a/content/posts/adding-vm-notes-and-custom-attributes-with-vra8/index.md b/content/posts/adding-vm-notes-and-custom-attributes-with-vra8/index.md index 28ca400..02040ad 100644 --- a/content/posts/adding-vm-notes-and-custom-attributes-with-vra8/index.md +++ b/content/posts/adding-vm-notes-and-custom-attributes-with-vra8/index.md @@ -11,7 +11,7 @@ tags: title: Adding VM Notes and Custom Attributes with vRA8 --- -*In [past posts](/series/vra8), I started by [creating a basic deployment infrastructure](/vra8-custom-provisioning-part-one) in Cloud Assembly and using tags to group those resources. I then [wrote an integration](/integrating-phpipam-with-vrealize-automation-8) to let vRA8 use phpIPAM for static address assignments. I [implemented a vRO workflow](/vra8-custom-provisioning-part-two) for generating unique VM names which fit an organization's established naming standard, and then [extended the workflow](/vra8-custom-provisioning-part-three) to avoid any naming conflicts in Active Directory and DNS. And, finally, I [created an intelligent provisioning request form in Service Broker](/vra8-custom-provisioning-part-four) to make it easy for users to get the servers they need. That's got the core functionality pretty well sorted, so moving forward I'll be detailing additions that enable new capabilities and enhance the experience.* +*In [past posts](/series/vra8), I started by [creating a basic deployment infrastructure](/vra8-custom-provisioning-part-one) in Cloud Assembly and using tags to group those resources. I then [wrote an integration](/integrating-phpipam-with-vrealize-automation-8) to let vRA8 use phpIPAM for static address assignments. I [implemented a vRO workflow](/vra8-custom-provisioning-part-two) for generating unique VM names which fit an organization's established naming standard, and then [extended the workflow](/vra8-custom-provisioning-part-three) to avoid any naming conflicts in Active Directory and DNS. And, finally, I [created an intelligent provisioning request form in Service Broker](/vra8-custom-provisioning-part-four) to make it easy for users to get the servers they need. That's got the core functionality pretty well sorted, so moving forward I'll be detailing additions that enable new capabilities and enhance the experience.* In this post, I'll describe how to get certain details from the Service Broker request form and into the VM's properties in vCenter. 
The obvious application of this is adding descriptive notes so I can remember what purpose a VM serves, but I will also be using [Custom Attributes](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vcenterhost.doc/GUID-73606C4C-763C-4E27-A1DA-032E4C46219D.html) to store the server's Point of Contact information and a record of which ticketing system request resulted in the server's creation. @@ -19,9 +19,9 @@ In this post, I'll describe how to get certain details from the Service Broker r I'll start this by adding a few new inputs to the cloud template in Cloud Assembly. ![New inputs in Cloud Assembly](F3Wkd3VT.png) -I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`. +I'm using a basic regex on the `poc_email` field to make sure that the user's input is *probably* a valid email address in the format `[some string]@[some string].[some string]`. -```yaml +```yaml {linenos=true} inputs: [...] description: @@ -48,9 +48,9 @@ inputs: I'll also need to add these to the `resources` section of the template so that they will get passed along with the deployment properties. ![New resource properties](N7YllJkxS.png) -I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string. +I'm actually going to combine the `poc_name` and `poc_email` fields into a single `poc` string. -```yaml +```yaml {linenos=true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine @@ -73,14 +73,14 @@ I can then go to Service Broker and drag the new fields onto the Custom Form can Okay, so I've got the information I want to pass on to vCenter. Now I need to whip up a new workflow in vRO that will actually do that (after [telling vRO how to connect to the vCenter](/vra8-custom-provisioning-part-two#interlude-connecting-vro-to-vcenter), of course). I'll want to call this after the VM has been provisioned, so I'll cleverly call the workflow "VM Post-Provisioning". ![Naming the new workflow](X9JhgWx8x.png) -The workflow will have a single input from vRA, `inputProperties` of type `Properties`. +The workflow will have a single input from vRA, `inputProperties` of type `Properties`. ![Workflow input](zHrp6GPcP.png) The first thing this workflow needs to do is parse `inputProperties (Properties)` to get the name of the VM, and it will then use that information to query vCenter and grab the corresponding VM object. So I'll add a scriptable task item to the workflow canvas and call it `Get VM Object`. It will take `inputProperties (Properties)` as its sole input, and output a new variable called `vm` of type `VC:VirtualMachine`. ![Get VM Object action](5ATk99aPW.png) The script for this task is fairly straightforward: -```js +```js {linenos=true} // JavaScript: Get VM Object // Inputs: inputProperties (Properties) // Outputs: vm (VC:VirtualMachine) @@ -99,7 +99,7 @@ The first part of the script creates a new VM config spec, inserts the descripti The second part uses a built-in action to set the `Point of Contact` and `Ticket` custom attributes accordingly. 
-```js +```js {linenos=true} // Javascript: Set Notes // Inputs: vm (VC:VirtualMachine), inputProperties (Properties) // Outputs: None diff --git a/content/posts/adguard-home-in-docker-on-photon-os/index.md b/content/posts/adguard-home-in-docker-on-photon-os/index.md index 8eeea46..21e01cb 100644 --- a/content/posts/adguard-home-in-docker-on-photon-os/index.md +++ b/content/posts/adguard-home-in-docker-on-photon-os/index.md @@ -34,7 +34,7 @@ Once the VM is created, I power it on and hop into the web console. The default ### Configure Networking My next step was to configure a static IP address by creating `/etc/systemd/network/10-static-en.network` and entering the following contents: -```conf +```cfg {linenos=true} [Match] Name=eth0 @@ -48,7 +48,7 @@ By the way, that `192.168.1.5` address is my Windows DC/DNS server that I use fo I also disabled DHCP by setting `DHCP=no` in `/etc/systemd/network/99-dhcp-en.network`: -```conf +```cfg {linenos=true} [Match] Name=e* @@ -70,26 +70,26 @@ Now that I'm in, I run `tdnf update` to make sure the VM is fully up to date. ### Install docker-compose Photon OS ships with Docker preinstalled, but I need to install `docker-compose` on my own to simplify container deployment. Per the [install instructions](https://docs.docker.com/compose/install/#install-compose), I run: -```shell +```commandroot curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose chmod +x /usr/local/bin/docker-compose ``` And then verify that it works: -```shell -root@adguard [ ~]# docker-compose --version +```commandroot-session +docker-compose --version docker-compose version 1.29.2, build 5becea4c ``` I'll also want to enable and start Docker: -```shell +```commandroot systemctl enable docker systemctl start docker ``` ### Disable DNSStubListener By default, the `resolved` daemon is listening on `127.0.0.53:53` and will prevent docker from binding to that port. Fortunately it's [pretty easy](https://github.com/pi-hole/docker-pi-hole#installing-on-ubuntu) to disable the `DNSStubListener` and free up the port: -```shell +```commandroot sed -r -i.orig 's/#?DNSStubListener=yes/DNSStubListener=no/g' /etc/systemd/resolved.conf rm /etc/resolv.conf && ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf systemctl restart systemd-resolved @@ -99,14 +99,14 @@ systemctl restart systemd-resolved Okay, now for the fun part. I create a directory for AdGuard to live in, and then create a `docker-compose.yaml` therein: -```shell +```commandroot mkdir ~/adguard cd ~/adguard vi docker-compose.yaml ``` And I define the container: -```yaml +```yaml {linenos=true} version: "3" services: @@ -133,8 +133,8 @@ services: Then I can fire it up with `docker-compose up --detach`: -```shell -root@adguard [ ~/adguard ]# docker-compose up --detach +```commandroot-session +docker-compose up --detach Creating network "adguard_default" with the default driver Pulling adguard (adguard/adguardhome:latest)... 
latest: Pulling from adguard/adguardhome diff --git a/content/posts/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk/index.md b/content/posts/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk/index.md index cb74f9b..a88dae9 100644 --- a/content/posts/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk/index.md +++ b/content/posts/automatic-unattended-expansion-of-linux-root-lvm-volume-to-fill-disk/index.md @@ -29,7 +29,7 @@ I found a great script [here](https://github.com/alpacacode/Homebrewn-Scripts/bl When I cobbled together this script I was primarily targeting the Enterprise Linux (RHEL, CentOS) systems that I work with in my environment, and those happened to have MBR partition tables. This script would need to be modified a bit to work with GPT partitions like you might find on Ubuntu. {{% /notice %}} -```shell +```shell {linenos=true} #!/bin/bash # This will attempt to automatically detect the LVM logical volume where / is mounted and then # expand the underlying physical partition, LVM physical volume, LVM volume group, LVM logical diff --git a/content/posts/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/index.md b/content/posts/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/index.md index 335eed1..7487d42 100644 --- a/content/posts/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/index.md +++ b/content/posts/bitwarden-password-manager-self-hosted-on-free-google-cloud-instance/index.md @@ -40,8 +40,11 @@ When I originally wrote this post back in September 2018, the containerized BitW 1. Log in to the [Google Domain admin portal](https://domains.google.com/registrar) and [create a new Dynamic DNS record](https://domains.google.com/registrar). This will provide a username and password specific for that record. 2. Log in to the GCE instance and run `sudo apt-get update` followed by `sudo apt-get install ddclient`. Part of the install process prompts you to configure things... just accept the defaults and move on. 3. Edit the `ddclient` config file to look like this, substituting the username, password, and FDQN from Google Domains: -```shell -$ sudo vi /etc/ddclient.conf +```command +sudo vim /etc/ddclient.conf +``` + +```cfg {linenos=true,hl_lines=["10-12"]} # Configuration file for ddclient generated by debconf # # /etc/ddclient.conf @@ -57,7 +60,7 @@ $ sudo vi /etc/ddclient.conf ``` 4. `sudo vi /etc/default/ddclient` and make sure that `run_daemon="true"`: -```shell +```cfg {linenos=true,hl_lines=16} # Configuration for ddclient scripts # generated from debconf on Sat Sep 8 21:58:02 UTC 2018 # @@ -80,21 +83,21 @@ run_daemon="true" daemon_interval="300" ``` 5. Restart the `ddclient` service - twice for good measure (daemon mode only gets activated on the second go *because reasons*): -```shell -$ sudo systemctl restart ddclient -$ sudo systemctl restart ddclient +```command +sudo systemctl restart ddclient +sudo systemctl restart ddclient ``` 6. After a few moments, refresh the Google Domains page to verify that your instance's external IP address is showing up on the new DDNS record. ### Install Docker *Steps taken from [here](https://docs.docker.com/install/linux/docker-ce/debian/).* 1. Update `apt` package index: -```shell -$ sudo apt-get update +```command +sudo apt-get update ``` 2. 
Install package management prereqs: -```shell -$ sudo apt-get install \ +```command-session +sudo apt-get install \ apt-transport-https \ ca-certificates \ curl \ @@ -102,47 +105,47 @@ $ sudo apt-get install \ software-properties-common ``` 3. Add Docker GPG key: -```shell -$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - +```command +curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - ``` 4. Add the Docker repo: -```shell -$ sudo add-apt-repository \ +```command-session +sudo add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/debian \ $(lsb_release -cs) \ stable" ``` 5. Update apt index again: -```shell -$ sudo apt-get update +```command +sudo apt-get update ``` 6. Install Docker: -```shell -$ sudo apt-get install docker-ce +```command +sudo apt-get install docker-ce ``` ### Install Certbot and generate SSL cert *Steps taken from [here](https://certbot.eff.org/instructions?ws=other&os=debianbuster).* 1. Install Certbot: -```shell -$ sudo apt-get install certbot +```command +sudo apt-get install certbot ``` 2. Generate certificate: -```shell -$ sudo certbot certonly --standalone -d [FQDN] +```command +sudo certbot certonly --standalone -d [FQDN] ``` 3. Create a directory to store the new certificates and copy them there: -```shell -$ sudo mkdir -p /ssl/keys/ -$ sudo cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/ -$ sudo cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/ +```command +sudo mkdir -p /ssl/keys/ +sudo cp -p /etc/letsencrypt/live/[FQDN]/fullchain.pem /ssl/keys/ +sudo cp -p /etc/letsencrypt/live/[FQDN]/privkey.pem /ssl/keys/ ``` ### Set up vaultwarden *Using the container image available [here](https://github.com/dani-garcia/vaultwarden).* 1. Let's just get it up and running first: -```shell -$ sudo docker run -d --name vaultwarden \ +```command-session +sudo docker run -d --name vaultwarden \ -e ROCKET_TLS={certs='"/ssl/fullchain.pem", key="/ssl/privkey.pem"}' \ -e ROCKET_PORT='8000' \ -v /ssl/keys/:/ssl/ \ @@ -154,9 +157,9 @@ $ sudo docker run -d --name vaultwarden \ 2. At this point you should be able to point your web browser at `https://[FQDN]` and see the BitWarden login screen. Click on the Create button and set up a new account. Log in, look around, add some passwords, etc. Everything should basically work just fine. 3. Unless you want to host passwords for all of the Internet you'll probably want to disable signups at some point by adding the `env` option `SIGNUPS_ALLOWED=false`. And you'll need to set `DOMAIN=https://[FQDN]` if you want to use U2F authentication: ```shell -$ sudo docker stop vaultwarden -$ sudo docker rm vaultwarden -$ sudo docker run -d --name vaultwarden \ +sudo docker stop vaultwarden +sudo docker rm vaultwarden +sudo docker run -d --name vaultwarden \ -e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \ -e ROCKET_PORT='8000' \ -e SIGNUPS_ALLOWED=false \ @@ -170,30 +173,39 @@ $ sudo docker run -d --name vaultwarden \ ### Install vaultwarden as a service *So we don't have to keep manually firing this thing off.* -1. Create a script to stop, remove, update, and (re)start the `vaultwarden` container: +1. 
Create a script at `/usr/local/bin/start-vaultwarden.sh` to stop, remove, update, and (re)start the `vaultwarden` container: +```command +sudo vim /usr/local/bin/start-vaultwarden.sh +``` + ```shell -$ sudo vi /usr/local/bin/start-vaultwarden.sh - #!/bin/bash +#!/bin/bash - docker stop vaultwarden - docker rm vaultwarden - docker pull vaultwarden/server +docker stop vaultwarden +docker rm vaultwarden +docker pull vaultwarden/server - docker run -d --name vaultwarden \ - -e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \ - -e ROCKET_PORT='8000' \ - -e SIGNUPS_ALLOWED=false \ - -e DOMAIN=https://[FQDN] \ - -v /ssl/keys/:/ssl/ \ - -v /bw-data/:/data/ \ - -v /icon_cache/ \ - -p 0.0.0.0:443:8000 \ - vaultwarden/server:latest -$ sudo chmod 744 /usr/local/bin/start-vaultwarden.sh +docker run -d --name vaultwarden \ + -e ROCKET_TLS={certs='"/ssl/fullchain.pem",key="/ssl/privkey.pem"'} \ + -e ROCKET_PORT='8000' \ + -e SIGNUPS_ALLOWED=false \ + -e DOMAIN=https://[FQDN] \ + -v /ssl/keys/:/ssl/ \ + -v /bw-data/:/data/ \ + -v /icon_cache/ \ + -p 0.0.0.0:443:8000 \ + vaultwarden/server:latest +``` + +```command +sudo chmod 744 /usr/local/bin/start-vaultwarden.sh ``` 2. And add it as a `systemd` service: -```shell -$ sudo vi /etc/systemd/system/vaultwarden.service +```command +sudo vim /etc/systemd/system/vaultwarden.service +``` + +```cfg [Unit] Description=BitWarden container Requires=docker.service @@ -206,26 +218,32 @@ $ sudo vi /etc/systemd/system/vaultwarden.service [Install] WantedBy=default.target -$ sudo chmod 644 /etc/systemd/system/vaultwarden.service +``` + +```command +sudo chmod 644 /etc/systemd/system/vaultwarden.service ``` 3. Try it out: -```shell -$ sudo systemctl start vaultwarden -$ sudo systemctl status vaultwarden - ● bitwarden.service - BitWarden container - Loaded: loaded (/etc/systemd/system/vaultwarden.service; enabled; vendor preset: enabled) - Active: deactivating (stop) since Sun 2018-09-09 03:43:20 UTC; 1s ago - Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS) - Main PID: 13104 (code=exited, status=0/SUCCESS); Control PID: 13229 (docker) - Tasks: 5 (limit: 4915) - Memory: 9.7M - CPU: 375ms - CGroup: /system.slice/vaultwarden.service - └─control - └─13229 /usr/bin/docker stop vaultwarden +```command +sudo systemctl start vaultwarden +``` - Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: Status: Image is up to date for vaultwarden/server:latest - Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645 +```command-session +sudo systemctl status vaultwarden +● bitwarden.service - BitWarden container + Loaded: loaded (/etc/systemd/system/vaultwarden.service; enabled; vendor preset: enabled) + Active: deactivating (stop) since Sun 2018-09-09 03:43:20 UTC; 1s ago + Process: 13104 ExecStart=/usr/local/bin/bitwarden-start.sh (code=exited, status=0/SUCCESS) + Main PID: 13104 (code=exited, status=0/SUCCESS); Control PID: 13229 (docker) + Tasks: 5 (limit: 4915) + Memory: 9.7M + CPU: 375ms + CGroup: /system.slice/vaultwarden.service + └─control + └─13229 /usr/bin/docker stop vaultwarden + +Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: Status: Image is up to date for vaultwarden/server:latest +Sep 09 03:43:20 vaultwarden vaultwarden-start.sh[13104]: ace64ca5294eee7e21be764ea1af9e328e944658b4335ce8721b99a33061d645 ``` ### Conclusion diff --git a/content/posts/bulk-import-vsphere-dvportgroups-to-phpipam/index.md 
b/content/posts/bulk-import-vsphere-dvportgroups-to-phpipam/index.md index fcd4711..58fb7bd 100644 --- a/content/posts/bulk-import-vsphere-dvportgroups-to-phpipam/index.md +++ b/content/posts/bulk-import-vsphere-dvportgroups-to-phpipam/index.md @@ -27,12 +27,12 @@ comment: true # Disable comment if false. I [recently wrote](/tanzu-community-edition-k8s-homelab/#a-real-workload---phpipam) about getting started with VMware's [Tanzu Community Edition](https://tanzucommunityedition.io/) and deploying [phpIPAM](https://phpipam.net/) as my first real-world Kubernetes workload. Well I've spent much of my time since then working on a script which would help to populate my phpIPAM instance with a list of networks to monitor. ### Planning and Exporting -The first step in making this work was to figure out which networks I wanted to import. We've got hundreds of different networks in use across our production vSphere environments. I focused only on those which are portgroups on distributed virtual switches since those configurations are pretty standardized (being vCenter constructs instead of configured on individual hosts). These dvPortGroups bear a naming standard which conveys all sorts of useful information, and it's easy and safe to rename any dvPortGroups which _don't_ fit the standard (unlike renaming portgroups on a standard virtual switch). +The first step in making this work was to figure out which networks I wanted to import. We've got hundreds of different networks in use across our production vSphere environments. I focused only on those which are portgroups on distributed virtual switches since those configurations are pretty standardized (being vCenter constructs instead of configured on individual hosts). These dvPortGroups bear a naming standard which conveys all sorts of useful information, and it's easy and safe to rename any dvPortGroups which _don't_ fit the standard (unlike renaming portgroups on a standard virtual switch). The standard naming convention is `[Site/Description] [Network Address]{/[Mask]}`. So the networks (across two virtual datacenters and two dvSwitches) look something like this: ![Production dvPortGroups approximated in my testing lab environment](dvportgroups.png) -Some networks have masks in the name, some don't; and some use an underscore (`_`) rather than a slash (`/`) to separate the network from the mask . Most networks correctly include the network address with a `0` in the last octet, but some use an `x` instead. And the VLANs associated with the networks have a varying number of digits. Consistency can be difficult so these are all things that I had to keep in mind as I worked on a solution which would make a true best effort at importing all of these. +Some networks have masks in the name, some don't; and some use an underscore (`_`) rather than a slash (`/`) to separate the network from the mask . Most networks correctly include the network address with a `0` in the last octet, but some use an `x` instead. And the VLANs associated with the networks have a varying number of digits. Consistency can be difficult so these are all things that I had to keep in mind as I worked on a solution which would make a true best effort at importing all of these. As long as the dvPortGroup names stick to this format I can parse the name to come up with a description as well as the IP space of the network. The dvPortGroup also carries information about the associated VLAN, which is useful information to have. 
And I can easily export this information with a simple PowerCLI query: @@ -53,7 +53,7 @@ VPOT8-Servers 172.20.10.32/27 VLAN 30 VPOT8-Servers 172.20.10.64_26 VLAN 40 ``` -In my [homelab](/vmware-home-lab-on-intel-nuc-9/), I only have a single vCenter. In production, we've got a handful of vCenters, and each manages the hosts in a given region. So I can use information about which vCenter hosts a dvPortGroup to figure out which region a network is in. When I import this data into phpIPAM, I can use the vCenter name to assign [remote scan agents](https://github.com/jbowdre/phpipam-agent-docker) to networks based on the region that they're in. I can also grab information about which virtual datacenter a dvPortGroup lives in, which I'll use for grouping networks into sites or sections. +In my [homelab](/vmware-home-lab-on-intel-nuc-9/), I only have a single vCenter. In production, we've got a handful of vCenters, and each manages the hosts in a given region. So I can use information about which vCenter hosts a dvPortGroup to figure out which region a network is in. When I import this data into phpIPAM, I can use the vCenter name to assign [remote scan agents](https://github.com/jbowdre/phpipam-agent-docker) to networks based on the region that they're in. I can also grab information about which virtual datacenter a dvPortGroup lives in, which I'll use for grouping networks into sites or sections. The vCenter can be found in the `Uid` property returned by `get-vdportgroup`: ```powershell @@ -96,7 +96,7 @@ I'm also going to head in to **Administration > IP Related Management > Sections ### Script time Well that's enough prep work; now it's time for the Python3 [script](https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py): -```python +```python {linenos=true} # The latest version of this script can be found on Github: # https://github.com/jbowdre/misc-scripts/blob/main/Python/phpipam-bulk-import.py @@ -361,7 +361,7 @@ def main(): # make sure filepath is a path to an actual file print("""\n\n This script helps to add vSphere networks to phpIPAM for IP address management. It is expected - that the vSphere networks are configured as portgroups on distributed virtual switches and + that the vSphere networks are configured as portgroups on distributed virtual switches and named like '[Description] [Subnet IP]{/[mask]}' (ex: 'LAB-Servers 192.168.1.0'). 
The following PowerCLI command can be used to export the networks from vSphere: @@ -377,7 +377,7 @@ def main(): else: print(f'[ERROR] Unable to find file at {filepath.name}.') continue - + # get collection of networks to import networks = import_networks(filepath) networkNames = get_sorted_list_of_unique_values('name', networks) @@ -415,7 +415,7 @@ def main(): else: del test break - + username = validate_input_is_not_empty('Username', f'Username with read/write access to {hostname}') password = getpass.getpass(f'Password for {username}:\n') apiAppId = validate_input_is_not_empty('App ID', f'App ID for API key (from https://{hostname}/administration/api/)') @@ -452,7 +452,7 @@ def main(): vlan_sets = get_vlan_sets(uri, token, vlans) if remote_agent: agent_sets = get_agent_sets(uri, token, regions) - + # create the networks for network in networks: network['region'] = regions[network['vcenter']]['name'] @@ -462,7 +462,7 @@ def main(): if network['vlan'] == 0: network['vlanId'] = None else: - network['vlanId'] = get_id_from_sets(network['vlan'], vlan_sets) + network['vlanId'] = get_id_from_sets(network['vlan'], vlan_sets) if remote_agent: network['agentId'] = get_id_from_sets(network['region'], agent_sets) else: @@ -478,7 +478,7 @@ if __name__ == "__main__": ``` I'll run it and provide the path to the network export CSV file: -```bash +```command python3 phpipam-bulk-import.py ~/networks.csv ``` @@ -570,7 +570,7 @@ So now phpIPAM knows about the vSphere networks I care about, and it can keep tr ... but I haven't actually *deployed* an agent yet. I'll do that by following the same basic steps [described here](/tanzu-community-edition-k8s-homelab/#phpipam-agent) to spin up my `phpipam-agent` on Kubernetes, and I'll plug in that automagically-generated code for the `IPAM_AGENT_KEY` environment variable: -```yaml +```yaml {linenos=true} --- apiVersion: apps/v1 kind: Deployment diff --git a/content/posts/cat-file-without-comments/index.md b/content/posts/cat-file-without-comments/index.md index 97e29dd..95f3a46 100644 --- a/content/posts/cat-file-without-comments/index.md +++ b/content/posts/cat-file-without-comments/index.md @@ -24,21 +24,23 @@ comment: true # Disable comment if false. It's super handy when a Linux config file is loaded with comments to tell you precisely how to configure the thing, but all those comments can really get in the way when you're trying to review the current configuration. 
Next time, instead of scrolling through page after page of lengthy embedded explanations, just use: -```shell +```command egrep -v "^\s*(#|$)" $filename ``` For added usefulness, I alias this command to `ccat` (which my brain interprets as "commentless cat") in [my `~/.zshrc`](https://github.com/jbowdre/dotfiles/blob/main/zsh/.zshrc): -```shell +```command alias ccat='egrep -v "^\s*(#|$)"' ``` Now instead of viewing all 75 lines of a [mostly-default Vagrantfile](/create-vms-chromebook-hashicorp-vagrant), I just see the 7 that matter: -```shell -; wc -l Vagrantfile +```command-session +wc -l Vagrantfile 75 Vagrantfile +``` -; ccat Vagrantfile +```command-session +ccat Vagrantfile Vagrant.configure("2") do |config| config.vm.box = "oopsme/windows11-22h2" config.vm.provider :libvirt do |libvirt| @@ -46,8 +48,10 @@ Vagrant.configure("2") do |config| libvirt.memory = 4096 end end +``` -; ccat Vagrantfile | wc -l +```command-session +ccat Vagrantfile | wc -l 7 ``` diff --git a/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md b/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md index f309a77..f5858fb 100644 --- a/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md +++ b/content/posts/cloud-based-wireguard-vpn-remote-homelab-access/index.md @@ -67,7 +67,7 @@ Anyway, after switching to the cheaper Standard tier I can click on the **Extern ##### Security Configuration The **Security** section lets me go ahead and upload an SSH public key that I can then use for logging into the instance once it's running. Of course, that means I'll first need to generate a key pair for this purpose: -```sh +```command ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_wireguard ``` @@ -90,24 +90,24 @@ I'll click **Create** and move on. #### WireGuard Server Setup Once the **Compute Engine > Instances** [page](https://console.cloud.google.com/compute/instances) indicates that the instance is ready, I can make a note of the listed public IP and then log in via SSH: -```sh +```command ssh -i ~/.ssh/id_25519_wireguard {PUBLIC_IP} ``` ##### Preparation And, as always, I'll first make sure the OS is fully updated before doing anything else: -```sh +```command sudo apt update sudo apt upgrade ``` Then I'll install `ufw` to easily manage the host firewall, `qrencode` to make it easier to generate configs for mobile clients, `openresolv` to avoid [this issue](https://superuser.com/questions/1500691/usr-bin-wg-quick-line-31-resolvconf-command-not-found-wireguard-debian/1500896), and `wireguard` to, um, guard the wires: -```sh +```command sudo apt install ufw qrencode openresolv wireguard ``` Configuring the host firewall with `ufw` is very straight forward: -```sh +```shell # First, SSH: sudo ufw allow 22/tcp # and WireGuard: @@ -117,34 +117,36 @@ sudo ufw enable ``` The last preparatory step is to enable packet forwarding in the kernel so that the instance will be able to route traffic between the remote clients and my home network (once I get to that point). 
I can configure that on-the-fly with: -```sh +```command sudo sysctl -w net.ipv4.ip_forward=1 ``` To make it permanent, I'll edit `/etc/sysctl.conf` and uncomment the same line: -```sh -$ sudo vi /etc/sysctl.conf +```command +sudo vi /etc/sysctl.conf +``` +```cfg # Uncomment the next line to enable packet forwarding for IPv4 net.ipv4.ip_forward=1 ``` ##### WireGuard Interface Config I'll switch to the root user, move into the `/etc/wireguard` directory, and issue `umask 077` so that the files I'm about to create will have a very limited permission set (to be accessible by root, and _only_ root): -```sh +```command sudo -i cd /etc/wireguard umask 077 ``` Then I can use the `wg genkey` command to generate the server's private key, save it to a file called `server.key`, pass it through `wg pubkey` to generate the corresponding public key, and save that to `server.pub`: -```sh +```command wg genkey | tee server.key | wg pubkey > server.pub ``` As I mentioned earlier, WireGuard will create a virtual network interface using an internal network to pass traffic between the WireGuard peers. By convention, that interface is `wg0` and it draws its configuration from a file in `/etc/wireguard` named `wg0.conf`. I could create a configuration file with a different name and thus wind up with a different interface name as well, but I'll stick with tradition to keep things easy to follow. The format of the interface configuration file will need to look something like this: -``` +```cfg [Interface] # this section defines the local WireGuard interface Address = # CIDR-format IP address of the virtual WireGuard interface ListenPort = # WireGuard listens on this port for incoming traffic (randomized if not specified) @@ -162,7 +164,7 @@ AllowedIPs = # which IPs will be routed to this peer There will be a single `[Interface]` section in each peer's configuration file, but they may include multiple `[Peer]` sections. For my config, I'll use the `10.200.200.0/24` network for WireGuard, and let this server be `10.200.200.1`, the VyOS router in my home lab `10.200.200.2`, and I'll assign IPs to the other peers from there. I found a note that Google Cloud uses an MTU size of `1460` bytes so that's what I'll set on this end. I'm going to configure WireGuard to use the VyOS router as the DNS server, and I'll specify my internal `lab.bowdre.net` search domain. Finally, I'll leverage the `PostUp` and `PostDown` directives to enable and disable NAT so that the server will be able to forward traffic between networks for me. So here's the start of my GCP WireGuard server's `/etc/wireguard/wg0.conf`: -```sh +```cfg # /etc/wireguard/wg0.conf [Interface] Address = 10.200.200.1/24 @@ -175,20 +177,23 @@ PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING ``` I don't have any other peers ready to add to this config yet, but I can go ahead and bring up the interface all the same. 
I'm going to use the `wg-quick` wrapper instead of calling `wg` directly since it simplifies a bit of the configuration, but first I'll need to enable the `wg-quick@{INTERFACE}` service so that it will run automatically at startup: -```sh +```command systemctl enable wg-quick@wg0 systemctl start wg-quick@wg0 ``` I can now bring up the interface with `wg-quick up wg0` and check the status with `wg show`: -``` -root@wireguard:~# wg-quick up wg0 +```commandroot-session +wg-quick up wg0 [#] ip link add wg0 type wireguard [#] wg setconf wg0 /dev/fd/63 [#] ip -4 address add 10.200.200.1/24 dev wg0 [#] ip link set mtu 1460 up dev wg0 [#] resolvconf -a wg0 -m 0 -x [#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE; ip6tables -A FORWARD -i wg0 -j ACCEPT; ip6tables -t nat -A POSTROUTING -o ens4 -j MASQUERADE +``` + +```commandroot-session root@wireguard:~# wg show interface: wg0 public key: {GCP_PUBLIC_IP} @@ -200,13 +205,13 @@ I'll come back here once I've got a peer config to add. ### Configure VyoS Router as WireGuard Peer Comparatively, configuring WireGuard on VyOS is a bit more direct. I'll start by entering configuration mode and generating and binding a key pair for this interface: -```sh +```commandroot configure run generate pki wireguard key-pair install interface wg0 ``` And then I'll configure the rest of the options needed for the interface: -```sh +```commandroot set interfaces wireguard wg0 address '10.200.200.2/24' set interfaces wireguard wg0 description 'VPN to GCP' set interfaces wireguard wg0 peer wireguard-gcp address '{GCP_PUBLIC_IP}' @@ -219,25 +224,25 @@ set interfaces wireguard wg0 peer wireguard-gcp public-key '{GCP_PUBLIC_KEY}' Note that this time I'm allowing all IPs (`0.0.0.0/0`) so that this WireGuard interface will pass traffic intended for any destination (whether it's local, remote, or on the Internet). And I'm specifying a [25-second `persistent-keepalive` interval](https://www.wireguard.com/quickstart/#nat-and-firewall-traversal-persistence) to help ensure that this NAT-ed tunnel stays up even when it's not actively passing traffic - after all, I'll need the GCP-hosted peer to be able to initiate the connection so I can access the home network remotely. While I'm at it, I'll also add a static route to ensure traffic for the WireGuard tunnel finds the right interface: -```sh +```commandroot set protocols static route 10.200.200.0/24 interface wg0 ``` And I'll add the new `wg0` interface as a listening address for the VyOS DNS forwarder: -```sh +```commandroot set service dns forwarding listen-address '10.200.200.2' ``` I can use the `compare` command to verify the changes I've made, and then apply and save the updated config: -```sh +```commandroot compare commit save ``` I can check the status of WireGuard on VyOS (and view the public key!) like so: -```sh -$ show interfaces wireguard wg0 summary +```commandroot-session +show interfaces wireguard wg0 summary interface: wg0 public key: {VYOS_PUBLIC_KEY} private key: (hidden) @@ -253,7 +258,7 @@ peer: {GCP_PUBLIC_KEY} See? That part was much easier to set up! But it doesn't look like it's actually passing traffic yet... because while the VyOS peer has been configured with the GCP peer's public key, the GCP peer doesn't know anything about the VyOS peer yet. So I'll copy `{VYOS_PUBLIC_KEY}` and SSH back to the GCP instance to finish that configuration. 
Once I'm there, I can edit `/etc/wireguard/wg0.conf` as root and add in a new `[Peer]` section at the bottom, like this: -``` +```cfg [Peer] # VyOS PublicKey = {VYOS_PUBLIC_KEY} @@ -263,7 +268,7 @@ AllowedIPs = 10.200.200.2/32, 192.168.1.0/24, 172.16.0.0/16 This time, I'm telling WireGuard that the new peer has IP `10.200.200.2` but that it should also get traffic destined for the `192.168.1.0/24` and `172.16.0.0/16` networks, my home and lab networks. Again, the `AllowedIPs` parameter is used for WireGuard's Cryptokey Routing so that it can keep track of which traffic goes to which peers (and which key to use for encryption). After saving the file, I can either restart WireGuard by bringing the interface down and back up (`wg-quick down wg0 && wg-quick up wg0`), or I can reload it on the fly with: -```sh +```command sudo -i wg syncconf wg0 <(wg-quick strip wg0) ``` @@ -271,8 +276,8 @@ wg syncconf wg0 <(wg-quick strip wg0) (I can't just use `wg syncconf wg0` directly since `/etc/wireguard/wg0.conf` includes the `PostUp`/`PostDown` commands which can only be parsed by the `wg-quick` wrapper, so I'm using `wg-quick strip {INTERFACE}` to grab the contents of the config file, remove the problematic bits, and then pass what's left to the `wg syncconf {INTERFACE}` command to update the current running config.) Now I can check the status of WireGuard on the GCP end: -```sh -root@wireguard:~# wg show +```commandroot-session +wg show interface: wg0 public key: {GCP_PUBLIC_KEY} private key: (hidden) @@ -286,16 +291,18 @@ peer: {VYOS_PUBLIC_KEY} ``` Hey, we're passing traffic now! And I can verify that I can ping stuff on my home and lab networks from the GCP instance: -```sh -john@wireguard:~$ ping -c 1 192.168.1.5 +```command-session +ping -c 1 192.168.1.5 PING 192.168.1.5 (192.168.1.5) 56(84) bytes of data. 64 bytes from 192.168.1.5: icmp_seq=1 ttl=127 time=35.6 ms --- 192.168.1.5 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 35.598/35.598/35.598/0.000 ms +``` -john@wireguard:~$ ping -c 1 172.16.10.1 +```command-session +ping -c 1 172.16.10.1 PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data. 64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=35.3 ms @@ -340,14 +347,17 @@ I _shouldn't_ need the keepalive for the "Road Warrior" peers connecting to the Now I can go ahead and save this configuration, but before I try (and fail) to connect I first need to tell the cloud-hosted peer about the Chromebook. So I fire up an SSH session to my GCP instance, become root, and edit the WireGuard configuration to add a new `[Peer]` section. -```sh +```command sudo -i +``` + +```commandroot vi /etc/wireguard/wg0.conf ``` Here's the new section that I'll add to the bottom of the config: -```sh +```cfg [Peer] # Chromebook PublicKey = {CB_PUBLIC_KEY} @@ -357,7 +367,7 @@ AllowedIPs = 10.200.200.3/32 This one is acting as a single-node endpoint (rather than an entryway into other networks like the VyOS peer) so setting `AllowedIPs` to only the peer's IP makes sure that WireGuard will only send it traffic specifically intended for this peer. 
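Once the updated config gets reloaded (more on that in a moment), `wg show wg0 allowed-ips` is a handy way to double-check that Cryptokey Routing table - it simply lists each peer's public key alongside the destinations that will be routed to it. The output below is just a rough sketch of what to expect:
```commandroot-session
wg show wg0 allowed-ips
{VYOS_PUBLIC_KEY}	10.200.200.2/32 192.168.1.0/24 172.16.0.0/16
{CB_PUBLIC_KEY}	10.200.200.3/32
```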
So my complete `/etc/wireguard/wg0.conf` looks like this so far: -```sh +```cfg # /etc/wireguard/wg0.conf [Interface] Address = 10.200.200.1/24 @@ -380,14 +390,14 @@ AllowedIPs = 10.200.200.3/32 ``` Now to save the file and reload the WireGuard configuration again: -```sh +```commandroot wg syncconf wg0 <(wg-quick strip wg0) ``` At this point I can activate the connection in the WireGuard Android app, wait a few seconds, and check with `wg show` to confirm that the tunnel has been established successfully: -```sh -root@wireguard:~# wg show +```commandroot-session +wg show interface: wg0 public key: {GCP_PUBLIC_KEY} private key: (hidden) @@ -413,20 +423,23 @@ And I can even access my homelab when not at home! Being able to copy-and-paste the required public keys between the WireGuard app and the SSH session to the GCP instance made it relatively easy to set up the Chromebook, but things could be a bit trickier on a phone without that kind of access. So instead I will create the phone's configuration on the WireGuard server in the cloud, render that config file as a QR code, and simply scan that through the phone's WireGuard app to import the settings. I'll start by SSHing to the GCP instance, elevating to root, setting the restrictive `umask` again, and creating a new folder to store client configurations. -```sh +```command sudo -i +``` + +```commandroot umask 077 mkdir /etc/wireguard/clients cd /etc/wireguard/clients ``` As before, I'll use the built-in `wg` commands to generate the private and public key pair: -```sh +```command wg genkey | tee phone1.key | wg pubkey > phone1.pub ``` I can then use those keys to assemble the config for the phone: -```sh +```cfg # /etc/wireguard/clients/phone1.conf [Interface] PrivateKey = {PHONE1_PRIVATE_KEY} @@ -440,19 +453,19 @@ Endpoint = {GCP_PUBLIC_IP}:51820 ``` I'll also add the interface address and corresponding public key to a new `[Peer]` section of `/etc/wireguard/wg0.conf`: -```sh +```cfg [Peer] PublicKey = {PHONE1_PUBLIC_KEY} AllowedIPs = 10.200.200.4/32 ``` And reload the WireGuard config: -```sh +```commandroot wg syncconf wg0 <(wg-quick strip wg0) ``` Back in the `clients/` directory, I can use `qrencode` to render the phone configuration file (keys and all!) as a QR code: -```sh +```commandroot qrencode -t ansiutf8 < phone1.conf ``` ![QR code config](20211028_qrcode_config.png) @@ -465,7 +478,7 @@ I can even access my vSphere lab environment - not that it offers a great mobile Before moving on too much further, though, I'm going to clean up the keys and client config file that I generated on the GCP instance. It's not great hygiene to keep a private key stored on the same system it's used to access. -```sh +```commandroot rm -f /etc/wireguard/clients/* ``` diff --git a/content/posts/create-vms-chromebook-hashicorp-vagrant/index.md b/content/posts/create-vms-chromebook-hashicorp-vagrant/index.md index ebec272..859b739 100644 --- a/content/posts/create-vms-chromebook-hashicorp-vagrant/index.md +++ b/content/posts/create-vms-chromebook-hashicorp-vagrant/index.md @@ -31,53 +31,52 @@ It took a bit of fumbling, but this article describes what it took to get a Vagr ### Install the prerequisites There are are a few packages which need to be installed before we can move on to the Vagrant-specific stuff. It's quite possible that these are already on your system.... but if they *aren't* already present you'll have a bad problem[^problem]. 
-```shell -sudo apt update -sudo apt install \ - build-essential \ - gpg \ - lsb-release \ - wget +```command-session +sudo apt update && sudo apt install \ + build-essential \ + gpg \ + lsb-release \ + wget ``` [^problem]: and [will not go to space today](https://xkcd.com/1133/). I'll be configuring Vagrant to use [`libvirt`](https://libvirt.org/) to interface with the [Kernel Virtual Machine (KVM)](https://www.linux-kvm.org/page/Main_Page) virtualization solution (rather than something like VirtualBox that would bring more overhead) so I'll need to install some packages for that as well: -```shell +```command sudo apt install virt-manager libvirt-dev ``` And to avoid having to `sudo` each time I interact with `libvirt` I'll add myself to that group: -```shell +```command sudo gpasswd -a $USER libvirt ; newgrp libvirt ``` And to avoid [this issue](https://github.com/virt-manager/virt-manager/issues/333) I'll make a tweak to the `qemu.conf` file: -```shell +```command echo "remember_owner = 0" | sudo tee -a /etc/libvirt/qemu.conf sudo systemctl restart libvirtd ``` I'm also going to use `rsync` to share a [synced folder](https://developer.hashicorp.com/vagrant/docs/synced-folders/basic_usage) between the host and the VM guest so I'll need to make sure that's installed too: -```shell +```command sudo apt install rsync ``` ### Install Vagrant With that out of the way, I'm ready to move on to the business of installing Vagrant. I'll start by adding the HashiCorp repository: -```shell +```command wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list ``` I'll then install the Vagrant package: -```shell +```command sudo apt update sudo apt install vagrant ``` I also need to install the [`vagrant-libvirt` plugin](https://github.com/vagrant-libvirt/vagrant-libvirt) so that Vagrant will know how to interact with `libvirt`: -```shell +```command vagrant plugin install vagrant-libvirt ``` @@ -87,13 +86,13 @@ Now I can get to the business of creating my first VM with Vagrant! Vagrant VMs are distributed as Boxes, and I can browse some published Boxes at [app.vagrantup.com/boxes/search?provider=libvirt](https://app.vagrantup.com/boxes/search?provider=libvirt) (applying the `provider=libvirt` filter so that I only see Boxes which will run on my chosen virtualization provider). For my first VM, I'll go with something light and simple: [`generic/alpine38`](https://app.vagrantup.com/generic/boxes/alpine38). So I'll create a new folder to contain the Vagrant configuration: -```shell +```command mkdir vagrant-alpine cd vagrant-alpine ``` And since I'm referencing a Vagrant Box which is published on Vagrant Cloud, downloading the config is as simple as: -```shell +```command vagrant init generic/alpine38 ``` @@ -106,7 +105,7 @@ the comments in the Vagrantfile as well as documentation on ``` Before I `vagrant up` the joint, I do need to make a quick tweak to the default Vagrantfile, which is what tells Vagrant how to configure the VM. By default, Vagrant will try to create a synced folder using NFS and will throw a nasty error when that (inevitably[^inevitable]) fails. So I'll open up the Vagrantfile to review and edit it: -```shell +```command vim Vagrantfile ``` @@ -135,8 +134,8 @@ end ``` With that, I'm ready to fire up this VM with `vagrant up`! 
Vagrant will look inside `Vagrantfile` to see the config, pull down the `generic/alpine38` Box from Vagrant Cloud, boot the VM, configure it so I can SSH in to it, and mount the synced folder: -```shell -; vagrant up +```command-session +vagrant up Bringing machine 'default' up with 'libvirt' provider... ==> default: Box 'generic/alpine38' could not be found. Attempting to find and install... default: Box Provider: libvirt @@ -161,8 +160,8 @@ Bringing machine 'default' up with 'libvirt' provider... ``` And then I can use `vagrant ssh` to log in to the new VM: -```shell -; vagrant ssh +```command-session +vagrant ssh alpine38:~$ cat /etc/os-release NAME="Alpine Linux" ID=alpine @@ -173,19 +172,19 @@ BUG_REPORT_URL="http://bugs.alpinelinux.org" ``` I can also verify that the synced folder came through as expected: -```shell -alpine38:~$ ls -l /vagrant +```command-session +ls -l /vagrant total 4 -rw-r--r-- 1 vagrant vagrant 3117 Feb 20 15:51 Vagrantfile ``` Once I'm finished poking at this VM, shutting it down is as easy as: -```shell +```command vagrant halt ``` And if I want to clean up and remove all traces of the VM, that's just: -```shell +```command vagrant destroy ``` @@ -201,7 +200,7 @@ Windows 11 makes for a pretty hefty VM which will require significant storage sp {{% /notice %}} Again, I'll create a new folder to hold the Vagrant configuration and do a `vagrant init`: -```shell +```command mkdir vagrant-win11 cd vagrant-win11 vagrant init oopsme/windows11-22h2 @@ -221,22 +220,22 @@ end [^ram]: Note here that `libvirt.memory` is specified in MB. Windows 11 boots happily with 4096 MB of RAM.... and somewhat less so with just 4 MB. *Ask me how I know...* Now it's time to bring it up. This one's going to take A While as it syncs the ~12GB Box first. -```shell +```command vagrant up ``` Eventually it should spit out that lovely **Machine booted and ready!** message and I can log in! I *can* do a `vagrant ssh` again to gain a shell in the Windows environment, but I'll probably want to interact with those sweet sweet graphics. That takes a little bit more effort. 
First, I'll use `virsh -c qemu:///system list` to see the running VM(s): -```shell -; virsh -c qemu:///system list +```command-session +virsh -c qemu:///system list Id Name State --------------------------------------- 10 vagrant-win11_default running ``` Then I can tell `virt-viewer` that I'd like to attach a session there: -```shell +```command virt-viewer -c qemu:///system -a vagrant-win11_default ``` diff --git a/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md b/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md index 21a0075..aeabb51 100644 --- a/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md +++ b/content/posts/creating-static-records-in-microsoft-dns-from-vrealize-automation/index.md @@ -27,13 +27,13 @@ Add-WindowsCapability -online -name Rsat.Dns.Tools~~~~0.0.1.0 ``` Instead of using a third-party SSH server, I'll use the OpenSSH Server that's already available in Windows 10 (1809+) and Server 2019: -```powershell +```powershell # Install OpenSSH Server Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 ``` I'll also want to set it so that the default shell upon SSH login is PowerShell (rather than the standard Command Prompt) so that I can have easy access to those DNS cmdlets: -```powershell +```powershell # Set PowerShell as the default Shell (for access to DNS cmdlets) New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force ``` @@ -45,13 +45,13 @@ Add-LocalGroupMember -Group Administrators -Member "lab\vra" ``` And I'll modify the OpenSSH configuration so that only members of that Administrators group are permitted to log into the server via SSH: -```powershell +```powershell # Restrict SSH access to members in the local Administrators group (Get-Content "C:\ProgramData\ssh\sshd_config") -Replace "# Authentication:", "$&`nAllowGroups Administrators" | Set-Content "C:\ProgramData\ssh\sshd_config" ``` Finally, I'll start the `sshd` service and set it to start up automatically: -```powershell +```powershell # Start service and set it to automatic Set-Service -Name sshd -StartupType Automatic -Status Running ``` @@ -59,13 +59,13 @@ Set-Service -Name sshd -StartupType Automatic -Status Running #### A quick test At this point, I can log in to the server via SSH and confirm that I can create and delete records in my DNS zone: ```powershell -$ ssh vra@win02.lab.bowdre.net -vra@win02.lab.bowdre.net's password: +ssh vra@win02.lab.bowdre.net +vra@win02.lab.bowdre.net's password: Windows PowerShell Copyright (C) Microsoft Corporation. All rights reserved. -PS C:\Users\vra> Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net -Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99 +PS C:\Users\vra> Add-DnsServerResourceRecordA -ComputerName win01.lab.bowdre.net -Name testy -ZoneName lab.bowdre.net -AllowUpdateAny -IPv4Address 172.16.99.99 PS C:\Users\vra> nslookup testy Server: win01.lab.bowdre.net @@ -111,7 +111,7 @@ resources: ``` So here's the complete cloud template that I've been working on: -```yaml +```yaml {linenos=true} formatVersion: 1 inputs: site: @@ -245,7 +245,7 @@ That should take care of the front-end changes. 
Now for the back-end stuff: I ne ### The vRO solution -I will be adding the DNS action on to my existing "VM Post-Provisioning" workflow (described [here](/adding-vm-notes-and-custom-attributes-with-vra8), which gets triggered after the VM has been successfully deployed. +I will be adding the DNS action on to my existing "VM Post-Provisioning" workflow (described [here](/adding-vm-notes-and-custom-attributes-with-vra8), which gets triggered after the VM has been successfully deployed. #### Configuration Element But first, I'm going to go to the **Assets > Configurations** section of the Orchestrator UI and create a new Configuration Element to store variables related to the SSH host and DNS configuration. @@ -258,7 +258,7 @@ And then I create the following variables: | Variable | Value | Type | | --- | --- | --- | -| `sshHost` | `win02.lab.bowdre.net` | string | +| `sshHost` | `win02.lab.bowdre.net` | string | | `sshUser` | `vra` | string | | `sshPass` | `*****` | secureString | | `dnsServer` | `[win01.lab.bowdre.net]` | Array/string | @@ -280,7 +280,7 @@ Now we're ready for the good part: inserting a new scriptable task into the work ![Task inputs](20210809_task_inputs.png) And here's the JavaScript for the task: -```js +```js {linenos=true} // JavaScript: Create DNS Record task // Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string) // Outputs: None @@ -312,7 +312,7 @@ if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) { System.log("Successfully created DNS record!") // make a note that it was successful so we don't repeat this unnecessarily created = true; - } + } } } sshSession.disconnect() @@ -341,7 +341,7 @@ The schema will include a single scriptable task: And it's going to be *pretty damn similar* to the other one: -```js +```js {linenos=true} // JavaScript: Delete DNS Record task // Inputs: inputProperties (Properties), dnsServers (Array/string), sshHost (string), sshUser (string), sshPass (secureString), supportedDomains (Array/string) // Outputs: None @@ -373,7 +373,7 @@ if (staticDns == "true" && supportedDomains.indexOf(dnsDomain) >= 0) { System.log("Successfully deleted DNS record!") // make a note that it was successful so we don't repeat this unnecessarily deleted = true; - } + } } } sshSession.disconnect() @@ -396,9 +396,9 @@ Once the deployment completes, I go back into vRO, find the most recent item in ![Workflow success!](20210813_workflow_success.png) And I can run a quick query to make sure that name actually resolves: -```shell -❯ dig +short bow-ttst-xxx023.lab.bowdre.net A -172.16.30.10 +```command-session +dig +short bow-ttst-xxx023.lab.bowdre.net A +172.16.30.10 ``` It works! @@ -410,9 +410,9 @@ Again, I'll check the **Workflow Runs** in vRO to see that the deprovisioning ta ![VM Deprovisioning workflow](20210813_workflow_deletion.png) And I can `dig` a little more to make sure the name doesn't resolve anymore: -```shell -❯ dig +short bow-ttst-xxx023.lab.bowdre.net A - +```command-session +dig +short bow-ttst-xxx023.lab.bowdre.net A + ``` It *really* works! 
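
If `dig` ever seems to disagree with what the workflow just did, the answer may simply be coming from a caching resolver. A quick way to rule that out is to point the query directly at the authoritative server — just a sketch, assuming `win01.lab.bowdre.net` (the DNS server used above) is reachable from the workstation running `dig`:
```command
dig +short @win01.lab.bowdre.net bow-ttst-xxx023.lab.bowdre.net A
```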
diff --git a/content/posts/easy-push-notifications-with-ntfy/index.md b/content/posts/easy-push-notifications-with-ntfy/index.md index a5447b8..9905a3b 100644 --- a/content/posts/easy-push-notifications-with-ntfy/index.md +++ b/content/posts/easy-push-notifications-with-ntfy/index.md @@ -42,13 +42,13 @@ I'm going to use the [Docker setup](https://docs.ntfy.sh/install/#docker) on a s #### Ntfy in Docker So I'll start by creating a new directory at `/opt/ntfy/` to hold the goods, and create a compose config. -```shell -$ sudo mkdir -p /opt/ntfy -$ sudo vim /opt/ntfy/docker-compose.yml +```command +sudo mkdir -p /opt/ntfy +sudo vim /opt/ntfy/docker-compose.yml ``` -`/opt/ntfy/docker-compose.yml`: -```yaml +```yaml {linenos=true} +# /opt/ntfy/docker-compose.yml version: "2.3" services: @@ -78,8 +78,8 @@ This config will create/mount folders in the working directory to store the ntfy I can go ahead and bring it up: -```shell -$ sudo docker-compose up -d +```command-session +sudo docker-compose up -d Creating network "ntfy_default" with the default driver Pulling ntfy (binwiederhier/ntfy:)... latest: Pulling from binwiederhier/ntfy @@ -92,8 +92,8 @@ Creating ntfy ... done #### Caddy Reverse Proxy I'll also want to add [the following](https://docs.ntfy.sh/config/#nginxapache2caddy) to my Caddy config: -`/etc/caddy/Caddyfile`: -``` +```caddyfile {linenos=true} +# /etc/caddy/Caddyfile ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev { reverse_proxy localhost:2586 @@ -109,8 +109,8 @@ ntfy.runtimeterror.dev, http://ntfy.runtimeterror.dev { ``` And I'll restart Caddy to apply the config: -```shell -$ sudo systemctl restart caddy +```command +sudo systemctl restart caddy ``` Now I can point my browser to `https://ntfy.runtimeterror.dev` and see the web interface: @@ -121,8 +121,8 @@ I can subscribe to a new topic: ![Subscribing to a public topic](subscribe_public_topic.png) And publish a message to it: -```shell -$ curl -d "Hi" https://ntfy.runtimeterror.dev/testy +```command-session +curl -d "Hi" https://ntfy.runtimeterror.dev/testy {"id":"80bUl6cKwgBP","time":1694981305,"expires":1695024505,"event":"message","topic":"testy","message":"Hi"} ``` @@ -134,16 +134,16 @@ Which will then show up as a notification in my browser: So now I've got my own ntfy server, and I've verified that it works for unauthenticated notifications. I don't really want to operate *anything* on the internet without requiring authentication, though, so I'm going to configure ntfy to prevent unauthenticated reads and writes. I'll start by creating a `server.yml` config file which will be mounted into the container. 
This config will specify where to store the user database and switch the default ACL to `deny-all`: -`/opt/ntfy/etc/ntfy/server.yml`: ```yaml +# /opt/ntfy/etc/ntfy/server.yml auth-file: "/var/lib/ntfy/user.db" auth-default-access: "deny-all" base-url: "https://ntfy.runtimeterror.dev" ``` I can then restart the container, and try again to subscribe to the same (or any other topic): -```shell -$ sudo docker-compose down && sudo docker-compose up -d +```command +sudo docker-compose down && sudo docker-compose up -d ``` @@ -151,31 +151,35 @@ Now I get prompted to log in: ![Login prompt](login_required.png) I'll need to use the ntfy CLI to create/manage entries in the user DB, and that means first grabbing a shell inside the container: -```shell -$ sudo docker exec -it ntfy /bin/sh +```command +sudo docker exec -it ntfy /bin/sh ``` For now, I'm going to create three users: one as an administrator, one as a "writer", and one as a "reader". I'll be prompted for a password for each: -```shell -$ ntfy user add --role=admin administrator +```command-session +ntfy user add --role=admin administrator user administrator added with role admin -$ ntfy user add writer +``` +```command-session +ntfy user add writer user writer added with role user -$ ntfy user add reader +``` +```command-session +ntfy user add reader user reader added with role user ``` The admin user has global read+write access, but right now the other two can't do anything. Let's make it so that `writer` can write to all topics, and `reader` can read from all topics: -```shell -$ ntfy access writer '*' write -$ ntfy access reader '*' read +```command +ntfy access writer '*' write +ntfy access reader '*' read ``` I could lock these down further by selecting specific topic names instead of `'*'` but this will do fine for now. Let's go ahead and verify the access as well: -```shell -$ ntfy access +```command-session +ntfy access user administrator (role: admin, tier: none) - read-write access to all topics (admin role) user reader (role: user, tier: none) @@ -188,16 +192,16 @@ user * (role: anonymous, tier: none) ``` While I'm at it, I also want to configure an access token to be used with the `writer` account. I'll be able to use that instead of username+password when publishing messages. -```shell -$ ntfy token add writer +```command-session +ntfy token add writer token tk_mm8o6cwxmox11wrnh8miehtivxk7m created for user writer, never expires ``` I can go back to the web, subscribe to the `testy` topic again using the `reader` credentials, and then test sending an authenticated notification with `curl`: -```shell -$ curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \ - -d "Once more, with auth!" \ - https://ntfy.runtimeterror.dev/testy +```command-session +curl -H "Authorization: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m" \ + -d "Once more, with auth!" \ + https://ntfy.runtimeterror.dev/testy {"id":"0dmX9emtehHe","time":1694987274,"expires":1695030474,"event":"message","topic":"testy","message":"Once more, with auth!"} ``` @@ -227,9 +231,9 @@ curl \ Note that I'm using a new topic name now: `server_alerts`. Topics are automatically created when messages are posted to them. I just need to make sure to subscribe to the topic in the web UI (or mobile app) so that I can receive these notifications. Okay, now let's make it executable and then give it a quick test: -```shell -$ chmod +x /usr/local/bin/ntfy_push.sh -$ /usr/local/bin/ntfy_push.sh "Script Test" "This is a test from the magic script I just wrote." 
+```command +chmod +x /usr/local/bin/ntfy_push.sh +/usr/local/bin/ntfy_push.sh "Script Test" "This is a test from the magic script I just wrote." ``` ![Script test](script_test.png) @@ -248,14 +252,14 @@ MESSAGE="System boot complete" ``` And this one should be executable as well: -```shell -$ chmod +x /usr/local/bin/ntfy_boot_complete.sh +```command +chmod +x /usr/local/bin/ntfy_boot_complete.sh ``` ##### Service Definition Finally I can create and register the service definition so that the script will run at each system boot. `/etc/systemd/system/ntfy_boot_complete.service`: -``` +```cfg [Unit] After=network.target @@ -266,7 +270,7 @@ ExecStart=/usr/local/bin/ntfy_boot_complete.sh WantedBy=default.target ``` -```shell +```command sudo systemctl daemon-reload sudo systemctl enable --now ntfy_boot_complete.service ``` @@ -285,8 +289,8 @@ Enabling ntfy as a notification handler is pretty straight-forward, and it will ##### Notify Configuration I'll add ntfy to Home Assistant by using the [RESTful Notifications](https://www.home-assistant.io/integrations/notify.rest/) integration. For that, I just need to update my instance's `configuration.yaml` to configure the connection. -`configuration.yaml`: -```yaml +```yaml {linenos=true} +# configuration.yaml notify: - name: ntfy platform: rest @@ -302,6 +306,7 @@ notify: The `Authorization` line references a secret stored in `secrets.yaml`: ```yaml +# secrets.yaml ntfy_token: Bearer tk_mm8o6cwxmox11wrnh8miehtivxk7m ``` diff --git a/content/posts/enable-tanzu-cli-auto-completion-bash-zsh/index.md b/content/posts/enable-tanzu-cli-auto-completion-bash-zsh/index.md index 1d3e329..5e707d3 100644 --- a/content/posts/enable-tanzu-cli-auto-completion-bash-zsh/index.md +++ b/content/posts/enable-tanzu-cli-auto-completion-bash-zsh/index.md @@ -51,13 +51,13 @@ Running `tanzu completion --help` will tell you what's needed, and you can just ``` So to get the completions to load automatically whenever you start a `bash` shell, run: -```shell +```command tanzu completion bash > $HOME/.tanzu/completion.bash.inc printf "\n# Tanzu shell completion\nsource '$HOME/.tanzu/completion.bash.inc'\n" >> $HOME/.bash_profile ``` For a `zsh` shell, it's: -```shell +```command echo "autoload -U compinit; compinit" >> ~/.zshrc tanzu completion zsh > "${fpath[1]}/_tanzu" ``` diff --git a/content/posts/esxi-arm-on-quartz64/index.md b/content/posts/esxi-arm-on-quartz64/index.md index 50264e6..be133cf 100644 --- a/content/posts/esxi-arm-on-quartz64/index.md +++ b/content/posts/esxi-arm-on-quartz64/index.md @@ -85,7 +85,7 @@ Let's start with the gear (hardware and software) I needed to make this work: The very first task is to write the required firmware image (download [here](https://github.com/jaredmcneill/quartz64_uefi/releases)) to a micro SD card. I used a 64GB card that I had lying around but you could easily get by with a *much* smaller one; the firmware image is tiny, and the card can't be used for storing anything else. Since I'm doing this on a Chromebook, I'll be using the [Chromebook Recovery Utility (CRU)](https://chrome.google.com/webstore/detail/chromebook-recovery-utili/pocpnlppkickgojjlmhdmidojbmbodfm) for writing the images to external storage as described [in another post](/burn-an-iso-to-usb-with-the-chromebook-recovery-utility/). 
After downloading [`QUARTZ64_EFI.img.gz`](https://github.com/jaredmcneill/quartz64_uefi/releases/download/2022-07-20/QUARTZ64_EFI.img.gz), I need to get it into a format recognized by CRU and, in this case, that means extracting the gzipped archive and then compressing the `.img` file into a standard `.zip`: -``` +```command gunzip QUARTZ64_EFI.img.gz zip QUARTZ64_EFI.img.zip QUARTZ64_EFI.img ``` @@ -98,7 +98,7 @@ I can then write it to the micro SD card by opening CRU, clicking on the gear ic I'll also need to prepare the ESXi installation media (download [here](https://customerconnect.vmware.com/downloads/get-download?downloadGroup=ESXI-ARM)). For that, I'll be using a 256GB USB drive. Due to the limited storage options on the Quartz64, I'll be installing ESXi onto the same drive I use to boot the installer so, in this case, the more storage the better. By default, ESXi 7.0 will consume up to 128GB for the new `ESX-OSData` partition; whatever is leftover will be made available as a VMFS datastore. That could be problematic given the unavailable/flaky USB support of the Quartz64. (While you *can* install ESXi onto a smaller drive, down to about ~20GB, the lack of additional storage on this hardware makes it pretty important to take advantage of as much space as you can.) In any case, to make the downloaded `VMware-VMvisor-Installer-7.0-20133114.aarch64.iso` writeable with CRU all I need to do is add `.bin` to the end of the filename: -``` +```command mv VMware-VMvisor-Installer-7.0-20133114.aarch64.iso{,.bin} ``` @@ -201,12 +201,12 @@ As I mentioned earlier, my initial goal is to deploy a Tailscale node on my new #### Deploying Photon OS VMware provides Photon in a few different formats, as described on the [download page](https://github.com/vmware/photon/wiki/Downloading-Photon-OS). I'm going to use the "OVA with virtual hardware v13 arm64" version so I'll kick off that download of `photon_uefi.ova`. I'm actually going to download that file straight to my `deb01` Linux VM: -```shell +```command wget https://packages.vmware.com/photon/4.0/Rev2/ova/photon_uefi.ova ``` and then spawn a quick Python web server to share it out: -```shell -❯ python3 -m http.server +```command-session +python3 -m http.server Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ... ``` @@ -232,12 +232,12 @@ The default password for Photon's `root` user is `changeme`. You'll be forced to ![First login, and the requisite password change](first_login.png) Now that I'm in, I'll set the hostname appropriately: -```bash +```commandroot hostnamectl set-hostname pho01 ``` For now, the VM pulled an IP from DHCP but I would like to configure that statically instead. To do that, I'll create a new interface file: -```bash +```commandroot-session cat > /etc/systemd/network/10-static-en.network << "EOF" [Match] @@ -251,7 +251,8 @@ DHCP = no IPForward = yes EOF - +``` +```commandroot chmod 644 /etc/systemd/network/10-static-en.network systemctl restart systemd-networkd ``` @@ -259,21 +260,23 @@ systemctl restart systemd-networkd I'm including `IPForward = yes` to [enable IP forwarding](https://tailscale.com/kb/1104/enable-ip-forwarding/) for Tailscale. 
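
For peace of mind before moving on, I could also verify that `systemd-networkd` applied the new config and that forwarding really is enabled — a quick sketch, assuming the interface is named `eth0`; the second command should print `1` if `IPForward = yes` took effect:
```commandroot
networkctl status eth0
cat /proc/sys/net/ipv4/conf/eth0/forwarding
```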
With networking sorted, it's probably a good idea to check for and apply any available updates: -```bash +```commandroot tdnf update -y ``` I'll also go ahead and create a normal user account (with sudo privileges) for me to use: -```bash +```commandroot useradd -G wheel -m john passwd john ``` Now I can use SSH to connect to the VM and ditch the web console: -```bash -❯ ssh pho01.lab.bowdre.net +```command-session +ssh pho01.lab.bowdre.net Password: -john@pho01 [ ~ ]$ sudo whoami +``` +```command-session +sudo whoami We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things: @@ -292,43 +295,44 @@ Looking good! I'll now move on to the justification[^justification] for this ent #### Installing Tailscale If I *weren't* doing this on hard mode, I could use Tailscale's [install script](https://tailscale.com/download) like I do on every other Linux system. Hard mode is what I do though, and the installer doesn't directly support Photon OS. I'll instead consult the [manual install instructions](https://tailscale.com/download/linux/static) which tell me to download the appropriate binaries from [https://pkgs.tailscale.com/stable/#static](https://pkgs.tailscale.com/stable/#static). So I'll grab the link for the latest `arm64` build and pull the down to the VM: -```bash +```command curl https://pkgs.tailscale.com/stable/tailscale_1.22.2_arm64.tgz --output tailscale_arm64.tgz ``` Then I can unpack it: -```bash +```command sudo tdnf install tar tar xvf tailscale_arm64.tgz cd tailscale_1.22.2_arm64/ ``` So I've got the `tailscale` and `tailscaled` binaries as well as some sample service configs in the `systemd` directory: -```bash -john@pho01 [ ~/tailscale_1.22.2_arm64 ]$ -.: +```command-session +ls total 32288 drwxr-x--- 2 john users 4096 Mar 18 02:44 systemd -rwxr-x--- 1 john users 12187139 Mar 18 02:44 tailscale -rwxr-x--- 1 john users 20866538 Mar 18 02:44 tailscaled - -./systemd: +``` +```command-session +ls ./systemd total 8 -rw-r----- 1 john users 287 Mar 18 02:44 tailscaled.defaults -rw-r----- 1 john users 674 Mar 18 02:44 tailscaled.service ``` Dealing with the binaries is straight-forward. I'll drop them into `/usr/bin/` and `/usr/sbin/` (respectively) and set the file permissions: -```bash +```command sudo install -m 755 tailscale /usr/bin/ sudo install -m 755 tailscaled /usr/sbin/ ``` Then I'll descend to the `systemd` folder and see what's up: -```bash -john@pho01 [ ~/tailscale_1.22.2_arm64/ ]$ cd systemd/ - -john@pho01 [ ~/tailscale_1.22.2_arm64/systemd ]$ cat tailscaled.defaults +```command +cd systemd/ +``` +```command-session +cat tailscaled.defaults # Set the port to listen on for incoming VPN packets. # Remote nodes will automatically be informed about the new port number, # but you might want to configure this in order to set external firewall @@ -337,8 +341,9 @@ PORT="41641" # Extra flags you might want to pass to tailscaled. FLAGS="" - -john@pho01 [ ~/tailscale_1.22.2_arm64/systemd ]$ cat tailscaled.service +``` +```command-session +cat tailscaled.service [Unit] Description=Tailscale node agent Documentation=https://tailscale.com/kb/ @@ -366,23 +371,23 @@ WantedBy=multi-user.target ``` `tailscaled.defaults` contains the default configuration that will be referenced by the service, and `tailscaled.service` tells me that it expects to find it at `/etc/defaults/tailscaled`. 
So I'll copy it there and set the perms: -```bash +```command sudo install -m 644 tailscaled.defaults /etc/defaults/tailscaled ``` `tailscaled.service` will get dropped in `/usr/lib/systemd/system/`: -```bash +```command sudo install -m 644 tailscaled.service /usr/lib/systemd/system/ ``` Then I'll enable the service and start it: -```bash +```command sudo systemctl enable tailscaled.service sudo systemctl start tailscaled.service ``` And finally log in to Tailscale, including my `tag:home` tag for [ACL purposes](/secure-networking-made-simple-with-tailscale/#acls) and a route advertisement for my home network so that my other Tailscale nodes can use this one to access other devices as well: -```bash +```command sudo tailscale up --advertise-tags "tag:home" --advertise-route "192.168.1.0/24" ``` @@ -408,7 +413,6 @@ Now I can remotely access the VM (and thus my homelab!) from any of my other Tai ### Conclusion I actually received the Quartz64 waay back on March 2nd, and it's taken me until this week to get all the pieces in place and working the way I wanted. -{{< tweet user="johndotbowdre" id="1499194756148125701" >}} As is so often the case, a lot of time and effort would have been saved if I had RTFM'd[^rtfm] before diving in to the deep end. I definitely hadn't anticipated all the limitations that would come with the Quartz64 SBC before ordering mine. Now that it's done, though, I'm pretty pleased with the setup, and I feel like I learned quite a bit along the way. I keep reminding myself that this is still a very new hardware platform. I'm excited to see how things improve with future development efforts. diff --git a/content/posts/federated-matrix-server-synapse-on-oracle-clouds-free-tier/index.md b/content/posts/federated-matrix-server-synapse-on-oracle-clouds-free-tier/index.md index edb8bad..22ffd3e 100644 --- a/content/posts/federated-matrix-server-synapse-on-oracle-clouds-free-tier/index.md +++ b/content/posts/federated-matrix-server-synapse-on-oracle-clouds-free-tier/index.md @@ -74,8 +74,8 @@ Success! My new ingress rules appear at the bottom of the list. ![New rules added](s5Y0rycng.png) That gets traffic from the internet and to my instance, but the OS is still going to drop the traffic at its own firewall. I'll need to work with `iptables` to change that. (You typically use `ufw` to manage firewalls more easily on Ubuntu, but it isn't included on this minimal image and seemed to butt heads with `iptables` when I tried adding it. I eventually decided it was better to just interact with `iptables` directly). I'll start by listing the existing rules on the `INPUT` chain: -``` -$ sudo iptables -L INPUT --line-numbers +```command-session +sudo iptables -L INPUT --line-numbers Chain INPUT (policy ACCEPT) num target prot opt source destination 1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED @@ -87,14 +87,14 @@ num target prot opt source destination ``` Note the `REJECT all` statement at line `6`. 
I'll need to insert my new `ACCEPT` rules for ports `80` and `443` above that implicit deny all: -``` +```command sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT ``` And then I'll confirm that the order is correct: -``` -$ sudo iptables -L INPUT --line-numbers +```command-session +sudo iptables -L INPUT --line-numbers Chain INPUT (policy ACCEPT) num target prot opt source destination 1 ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED @@ -108,8 +108,8 @@ num target prot opt source destination ``` I can use `nmap` running from my local Linux environment to confirm that I can now reach those ports on the VM. (They're still "closed" since nothing is listening on the ports yet, but the connections aren't being rejected.) -``` -$ nmap -Pn matrix.bowdre.net +```command-session +nmap -Pn matrix.bowdre.net Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 12:49 CDT Nmap scan report for matrix.bowdre.net(150.136.6.180) Host is up (0.086s latency). @@ -126,15 +126,15 @@ Nmap done: 1 IP address (1 host up) scanned in 8.44 seconds Cool! Before I move on, I'll be sure to make the rules persistent so they'll be re-applied whenever `iptables` starts up: Make rules persistent: -``` -$ sudo netfilter-persistent save +```command-session +sudo netfilter-persistent save run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save ``` ### Reverse proxy setup I had initially planned on using `certbot` to generate Let's Encrypt certificates, and then reference the certs as needed from an `nginx` or Apache reverse proxy configuration. While researching how the [proxy would need to be configured to front Synapse](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md), I found this sample `nginx` configuration: -```conf +```nginx {linenos=true} server { listen 443 ssl http2; listen [::]:443 ssl http2; @@ -159,7 +159,7 @@ server { ``` And this sample Apache one: -```conf +```apache {linenos=true} SSLEngine on ServerName matrix.example.com @@ -185,7 +185,7 @@ And this sample Apache one: ``` I also found this sample config for another web server called [Caddy](https://caddyserver.com): -``` +```caddy {linenos=true} matrix.example.com { reverse_proxy /_matrix/* http://localhost:8008 reverse_proxy /_synapse/client/* http://localhost:8008 @@ -198,7 +198,7 @@ example.com:8448 { One of these looks much simpler than the other two. I'd never heard of Caddy so I did some quick digging, and I found that it would actually [handle the certificates entirely automatically](https://caddyserver.com/docs/automatic-https) - in addition to having a much easier config. [Installing Caddy](https://caddyserver.com/docs/install#debian-ubuntu-raspbian) wasn't too bad, either: -```sh +```command sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo apt-key add - curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list @@ -207,18 +207,18 @@ sudo apt install caddy ``` Then I just need to put my configuration into the default `Caddyfile`, including the required `.well-known` delegation piece from earlier. 
-``` -$ sudo vi /etc/caddy/Caddyfile +```caddy {linenos=true} +# /etc/caddy/Caddyfile matrix.bowdre.net { - reverse_proxy /_matrix/* http://localhost:8008 - reverse_proxy /_synapse/client/* http://localhost:8008 + reverse_proxy /_matrix/* http://localhost:8008 + reverse_proxy /_synapse/client/* http://localhost:8008 } bowdre.net { - route { - respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}` - redir https://virtuallypotato.com - } + route { + respond /.well-known/matrix/server `{"m.server": "matrix.bowdre.net:443"}` + redir https://virtuallypotato.com + } } ``` There's a lot happening in that 11-line `Caddyfile`, but it's not complicated by any means. The `matrix.bowdre.net` section is pretty much exactly yanked from the sample config, and it's going to pass any requests that start like `matrix.bowdre.net/_matrix/` or `matrix.bowdre.net/_synapse/client/` through to the Synapse server listening locally on port `8008`. Caddy will automatically request and apply a Let's Encrypt or ZeroSSL cert for any server names spelled out in the config - very slick! @@ -228,15 +228,15 @@ I set up the `bowdre.net` section to return the appropriate JSON string to tell (I wouldn't need that section at all if I were using a separate web server for `bowdre.net`; instead, I'd basically just add that `respond /.well-known/matrix/server` line to that other server's config.) Now to enable the `caddy` service, start it, and restart it so that it loads the new config: -``` +```command sudo systemctl enable caddy sudo systemctl start caddy sudo systemctl restart caddy ``` If I repeat my `nmap` scan from earlier, I'll see that the HTTP and HTTPS ports are now open. The server still isn't actually serving anything on those ports yet, but at least it's listening. -``` -$ nmap -Pn matrix.bowdre.net +```command-session +nmap -Pn matrix.bowdre.net Starting Nmap 7.70 ( https://nmap.org ) at 2021-06-27 13:44 CDT Nmap scan report for matrix.bowdre.net (150.136.6.180) Host is up (0.034s latency). @@ -265,56 +265,58 @@ Okay, let's actually serve something up now. 
#### Docker setup Before I can get on with [deploying Synapse in Docker](https://hub.docker.com/r/matrixdotorg/synapse), I first need to [install Docker](https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository) on the system: -```sh +```command-session sudo apt-get install \ apt-transport-https \ ca-certificates \ curl \ gnupg \ lsb-release - +``` +```command curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg - +``` +```command-session echo \ - "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null - + "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ + $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null +``` +```command sudo apt update - sudo apt install docker-ce docker-ce-cli containerd.io ``` I'll also [install Docker Compose](https://docs.docker.com/compose/install/#install-compose): -```sh +```command sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose - sudo chmod +x /usr/local/bin/docker-compose ``` And I'll add my `ubuntu` user to the `docker` group so that I won't have to run every docker command with `sudo`: -``` +```command sudo usermod -G docker -a ubuntu ``` I'll log out and back in so that the membership change takes effect, and then test both `docker` and `docker-compose` to make sure they're working: -``` -$ docker --version +```command-session +docker --version Docker version 20.10.7, build f0df350 - -$ docker-compose --version +``` +```command-session +docker-compose --version docker-compose version 1.29.2, build 5becea4c ``` #### Synapse setup Now I'll make a place for the Synapse installation to live, including a `data` folder that will be mounted into the container: -``` +```command sudo mkdir -p /opt/matrix/synapse/data cd /opt/matrix/synapse ``` And then I'll create the compose file to define the deployment: -```yaml -$ sudo vi docker-compose.yml +```yaml {linenos=true} +# /opt/matrix/synapse/docker-compose.yaml services: synapse: container_name: "synapse" @@ -328,8 +330,8 @@ services: Before I can fire this up, I'll need to generate an initial configuration as [described in the documentation](https://hub.docker.com/r/matrixdotorg/synapse). Here I'll specify the server name that I'd like other Matrix servers to know mine by (`bowdre.net`): -```sh -$ docker run -it --rm \ +```command-session +docker run -it --rm \ -v "/opt/matrix/synapse/data:/data" \ -e SYNAPSE_SERVER_NAME=bowdre.net \ -e SYNAPSE_REPORT_STATS=yes \ @@ -373,15 +375,15 @@ so that I can create a user account without fumbling with the CLI. I'll be sure There are a bunch of other useful configurations that can be made here, but these will do to get things going for now. Time to start it up: -``` -$ docker-compose up -d +```command-session +docker-compose up -d Creating network "synapse_default" with the default driver Creating synapse ... 
done ``` And use `docker ps` to confirm that it's running: -``` -$ docker ps +```command-session +docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 573612ec5735 matrixdotorg/synapse "/start.py" 25 seconds ago Up 23 seconds (healthy) 8009/tcp, 127.0.0.1:8008->8008/tcp, 8448/tcp synapse ``` @@ -400,6 +402,7 @@ And I can view the JSON report at the bottom of the page to confirm that it's co "m.server": "matrix.bowdre.net:443", "CacheExpiresAt": 0 }, +} ``` Now I can fire up my [Matrix client of choice](https://element.io/get-started)), specify my homeserver using its full FQDN, and [register](https://app.element.io/#/register) a new user account: @@ -414,15 +417,13 @@ All in, I'm pretty pleased with how this little project turned out, and I learne ### Update: Updating After a while, it's probably a good idea to update both the Ubntu server and the Synapse container running on it. Updating the server itself is as easy as: -```sh +```command sudo apt update sudo apt upgrade -# And, if needed: -sudo reboot ``` Here's what I do to update the container: -```sh +```bash # Move to the working directory cd /opt/matrix/synapse # Pull a new version of the synapse image diff --git a/content/posts/finding-the-most-popular-ips-in-a-log-file/index.md b/content/posts/finding-the-most-popular-ips-in-a-log-file/index.md index 98f6c66..9266007 100644 --- a/content/posts/finding-the-most-popular-ips-in-a-log-file/index.md +++ b/content/posts/finding-the-most-popular-ips-in-a-log-file/index.md @@ -14,32 +14,32 @@ I found myself with a sudden need for parsing a Linux server's logs to figure ou ### Find IP-ish strings This will get you all occurrences of things which look vaguely like IPv4 addresses: -```shell +```command grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT ``` (It's not a perfect IP address regex since it would match things like `987.654.321.555` but it's close enough for my needs.) ### Filter out `localhost` The log likely include a LOT of traffic to/from `127.0.0.1` so let's toss out `localhost` by piping through `grep -v "127.0.0.1"` (`-v` will do an inverse match - only return results which *don't* match the given expression): -```shell +```command grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" ``` ### Count up the duplicates Now we need to know how many times each IP shows up in the log. We can do that by passing the output through `uniq -c` (`uniq` will filter for unique entries, and the `-c` flag will return a count of how many times each result appears): -```shell +```command grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c ``` ### Sort the results We can use `sort` to sort the results. `-n` tells it sort based on numeric rather than character values, and `-r` reverses the list so that the larger numbers appear at the top: -```shell +```command grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c | sort -n -r ``` ### Top 5 And, finally, let's use `head -n 5` to only get the first five results: -```shell +```command grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | grep -v "127.0.0.1" | uniq -c | sort -n -r | head -n 5 ``` @@ -47,7 +47,7 @@ grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' ACCESS_LOG.TXT | gre You know how old log files get rotated and compressed into files like `logname.1.gz`? 
I *very* recently learned that there are versions of the standard Linux text manipulation tools which can work directly on compressed log files, without having to first extract the files. I'd been doing things the hard way for years - no longer, now that I know about `zcat`, `zdiff`, `zgrep`, and `zless`! So let's use a `for` loop to iterate through 20 of those compressed logs, and use `date -r [filename]` to get the timestamp for each log as we go: -```bash +```command for i in {1..20}; do date -r ACCESS_LOG.$i.gz; zgrep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' \ACCESS_LOG.log.$i.gz | grep -v "127.0.0.1" | uniq -c | sort -n -r | head -n 5; done ``` Nice! \ No newline at end of file diff --git a/content/posts/fixing-403-error-ssc-8-6-vra-idm/index.md b/content/posts/fixing-403-error-ssc-8-6-vra-idm/index.md index 4e527c1..02a4b4c 100644 --- a/content/posts/fixing-403-error-ssc-8-6-vra-idm/index.md +++ b/content/posts/fixing-403-error-ssc-8-6-vra-idm/index.md @@ -39,8 +39,8 @@ ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verif ``` Further, attempting to pull down that URL with `curl` also failed: -```sh -root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery +```commandroot-session +curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery curl: (60) SSL certificate problem: self signed certificate in certificate chain More details here: https://curl.se/docs/sslcerts.html @@ -52,7 +52,7 @@ how to fix it, please visit the web page mentioned above. In my homelab, I am indeed using self-signed certificates. I also encountered the same issue in my lab at work, though, and I'm using certs issued by our enterprise CA there. I had run into a similar problem with previous versions of SSC, but the [quick-and-dirty workaround to disable certificate verification](https://communities.vmware.com/t5/VMware-vRealize-Discussions/SaltStack-Config-Integration-show-Blank-Page/td-p/2863973) doesn't seem to work anymore. ### The Solution -Clearly I needed to import either the vRA system's certificate (for my homelab) or the certificate chain for my enterprise CA (for my work environment) into SSC's certificate store so that it will trust vRA. But how? +Clearly I needed to import either the vRA system's certificate (for my homelab) or the certificate chain for my enterprise CA (for my work environment) into SSC's certificate store so that it will trust vRA. But how? I fumbled around for a bit and managed to get the required certs added to the system certificate store so that my `curl` test would succeed, but trying to access the SSC web UI still gave me a big middle finger. I eventually found [this documentation](https://docs.vmware.com/en/VMware-vRealize-Automation-SaltStack-Config/8.6/install-configure-saltstack-config/GUID-21A87CE2-8184-4F41-B71B-0FCBB93F21FC.html#troubleshooting-saltstack-config-environments-with-vrealize-automation-that-use-selfsigned-certificates-3) which describes how to configure SSC to work with self-signed certs, and it held the missing detail of how to tell the SaltStack Returner-as-a-Service (RaaS) component that it should use that system certificate store. @@ -61,21 +61,21 @@ So here's what I did to get things working in my homelab: ![Exporting the self-signed CA cert](20211105_export_selfsigned_ca.png) 2. Open the file in a text editor, and copy the contents into a new file on the SSC appliance. I used `~/vra.crt`. 3. 
Append the certificate to the end of the system `ca-bundle.crt`: -```sh +```commandroot cat > /etc/pki/tls/certs/ca-bundle.crt ``` 4. Test that I can now `curl` from vRA without a certificate error: -```sh -root@ssc [ ~ ]# curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery +```commandroot-session +curl https://vra.lab.bowdre.net/csp/gateway/am/api/auth/discovery {"timestamp":1636139143260,"type":"CLIENT_ERROR","status":"400 BAD_REQUEST","error":"Bad Request","serverMessage":"400 BAD_REQUEST \"Required String parameter 'state' is not present\""} ``` -5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding -``` +5. Edit `/usr/lib/systemd/system/raas.service` to update the service definition so it will look to the `ca-bundle.crt` file by adding +```cfg Environment=REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt ``` above the `ExecStart` line: -```sh -root@ssc [ ~ ]# cat /usr/lib/systemd/system/raas.service +```cfg {linenos=true,hl_lines=16} +# /usr/lib/systemd/system/raas.service [Unit] Description=The SaltStack Enterprise API Server After=network.target @@ -97,7 +97,7 @@ TimeoutStopSec=90 WantedBy=multi-user.target ``` 6. Stop and restart the `raas` service: -```sh +```command systemctl daemon-reload systemctl stop raas systemctl start raas @@ -110,7 +110,7 @@ systemctl start raas The steps for doing this at work with an enterprise CA were pretty similar, with just slightly-different steps 1 and 2: 1. Access the enterprise CA and download the CA chain, which came in `.p7b` format. 2. Use `openssl` to extract the individual certificates: -```sh +```command openssl pkcs7 -inform PEM -outform PEM -in enterprise-ca-chain.p7b -print_certs > enterprise-ca-chain.pem ``` Copy it to the SSC appliance, and then pick up with Step 3 above. diff --git a/content/posts/free-serverless-url-shortener-google-cloud-run/index.md b/content/posts/free-serverless-url-shortener-google-cloud-run/index.md index 0f863b4..8b0814a 100644 --- a/content/posts/free-serverless-url-shortener-google-cloud-run/index.md +++ b/content/posts/free-serverless-url-shortener-google-cloud-run/index.md @@ -30,7 +30,7 @@ At this point, I was ready to actually kick off the deployment. Ahmet made this ![Authorize Cloud Shell prompt](20210820_authorize_cloud_shell.png) -The script prompted me to select a project and a region, and then asked for the Sheet ID that I copied earlier. +The script prompted me to select a project and a region, and then asked for the Sheet ID that I copied earlier. ![Cloud Shell deployment](20210820_cloud_shell.png) ### Grant access to the Sheet @@ -82,10 +82,9 @@ And now I can hand out handy-dandy short links! 
| Link | Description| | --- | --- | -| [go.bowdre.net/ghia](https://go.bowdre.net/ghia) | 1974 VW Karmann Ghia project | +| [go.bowdre.net/coso](https://go.bowdre.net/coso) | Follow me on CounterSocial | | [go.bowdre.net/conedoge](https://go.bowdre.net/conedoge) | 2014 Subaru BRZ autocross videos | -| [go.bowdre.net/matrix](https://go.bowdre.net/matrix) | Chat with me on Matrix | -| [go.bowdre.net/twits](https://go.bowdre.net/twits) | Follow me on Twitter | -| [go.bowdre.net/stadia](https://go.bowdre.net/stadia) | Game with me on Stadia | +| [go.bowdre.net/cooltechshit](https://go.bowdre.net/cooltechshit) | A collection of cool tech shit (references and resources) | +| [go.bowdre.net/stuffiuse](https://go.bowdre.net/stuffiuse) | Things that I use (and think you should use too) | | [go.bowdre.net/shorterer](https://go.bowdre.net/shorterer) | This post! | diff --git a/content/posts/getting-started-vra-rest-api/index.md b/content/posts/getting-started-vra-rest-api/index.md index 6f8e5e0..8b67db8 100644 --- a/content/posts/getting-started-vra-rest-api/index.md +++ b/content/posts/getting-started-vra-rest-api/index.md @@ -44,7 +44,7 @@ After hitting **Execute**, the Swagger UI will populate the *Responses* section ![curl request format](login_controller_3.png) So I could easily replicate this using the `curl` utility by just copying and pasting the following into a shell: -```shell +```command-session curl -X 'POST' \ 'https://vra.lab.bowdre.net/csp/gateway/am/api/login' \ -H 'accept: */*' \ @@ -175,7 +175,7 @@ As you can see, Swagger can really help to jump-start the exploration of a new A [HTTPie](https://httpie.io/) is a handy command-line utility optimized for interacting with web APIs. This will make things easier as I dig deeper. Installing the [Debian package](https://httpie.io/docs/cli/debian-and-ubuntu) is a piece of ~~cake~~ _pie_[^pie]: -```shell +```command curl -SsL https://packages.httpie.io/deb/KEY.gpg | sudo apt-key add - sudo curl -SsL -o /etc/apt/sources.list.d/httpie.list https://packages.httpie.io/deb/httpie.list sudo apt update @@ -183,8 +183,8 @@ sudo apt install httpie ``` Once installed, running `http` will give me a quick overview of how to use this new tool: -```shell {hl_lines=[3]} -; http +```command-session +http usage: http [METHOD] URL [REQUEST_ITEM ...] @@ -198,12 +198,12 @@ HTTPie cleverly interprets anything passed after the URL as a [request item](htt > Each request item is simply a key/value pair separated with the following characters: `:` (headers), `=` (data field, e.g., JSON, form), `:=` (raw data field), `==` (query parameters), `@` (file upload). 
So my earlier request for an authentication token becomes: -```shell +```command https POST vra.lab.bowdre.net/csp/gateway/am/api/login username='vra' password='********' domain='lab.bowdre.net' ``` {{% notice tip "Working with Self-Signed Certificates" %}} If your vRA endpoint is using a self-signed or otherwise untrusted certificate, pass the HTTPie option `--verify=no` to ignore certificate errors: -``` +```command https --verify=no POST [URL] [REQUEST_ITEMS] ``` {{% /notice %}} @@ -211,17 +211,17 @@ https --verify=no POST [URL] [REQUEST_ITEMS] Running that will return a bunch of interesting headers but I'm mainly interested in the response body: ```json { - "cspAuthToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlhLFNUPVNvZmlhLEM9QkciLCJpYXQiOjE2NTQwMjQw[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-ErjC6F3c2mV1qIqES2oZbEpjxar16ZVSPshIaOoWRXe5uZB21tkuwVMgZuuwgmpliG_JBa1Y6Oh0FZBbI7o0ERro9qOW-s2npz4Csv5FwcXt0fa4esbXXIKINjqZMh9NDDb23bUabSag" + "cspAuthToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlhLFNUPVNvZmlhLEM9QkciLCJpYXQiOjE2NTQwMjQw[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-ErjC6F3c2mV1qIqES2oZbEpjxar16ZVSPshIaOoWRXe5uZB21tkuwVMgZuuwgmpliG_JBa1Y6Oh0FZBbI7o0ERro9qOW-s2npz4Csv5FwcXt0fa4esbXXIKINjqZMh9NDDb23bUabSag" } ``` There's the auth token[^token] that I'll need for subsequent requests. I'll store that in a variable so that it's easier to wield: -```shell +```command token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IjI4NDY0MjAzMzA2NDQwMTQ2NDQifQ.eyJpc3MiOiJDTj1QcmVsdWRlIElkZW50aXR5IFNlcnZpY2UsT1U9Q01CVSxPPVZNd2FyZSxMPVNvZmlhLFNUPVNvZmlhLEM9QkciLCJpYXQiOjE2NTQwMjQw[...]HBOQQwEepXTNAaTv9gWMKwvPzktmKWyJFmC64FGomRyRyWiJMkLy3xmvYQERwxaDj_15-ErjC6F3c2mV1qIqES2oZbEpjxar16ZVSPshIaOoWRXe5uZB21tkuwVMgZuuwgmpliG_JBa1Y6Oh0FZBbI7o0ERro9qOW-s2npz4Csv5FwcXt0fa4esbXXIKINjqZMh9NDDb23bUabSag ``` So now if I want to find out which images have been configured in vRA, I can ask: -```shell +```command https GET vra.lab.bowdre.net/iaas/api/images "Authorization: Bearer $token" ``` {{% notice note "Request Items" %}} @@ -229,80 +229,80 @@ Remember from above that HTTPie will automatically insert key/value pairs separa {{% /notice %}} And I'll get back some headers followed by an JSON object detailing the defined image mappings broken up by region: -```json {hl_lines=[11,14,37,40,53,56]} +```json {linenos=true,hl_lines=[11,14,37,40,53,56]} { - "content": [ - { - "_links": { - "region": { - "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" - } - }, - "externalRegionId": "Datacenter:datacenter-39056", - "mapping": { - "Photon 4": { - "_links": { - "region": { - "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" - } - }, - "cloudConfig": "", - "constraints": [], - "description": "photon-arm", - "externalId": "50023810-ae56-3c58-f374-adf6e0645886", - "externalRegionId": "Datacenter:datacenter-39056", - "id": "8885e87d8a5898cf12b5abc3e5c715e5a65f7179", - "isPrivate": false, - "name": "photon-arm", - "osFamily": "LINUX" - } - } - }, - { - "_links": { - "region": { - "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" - } - }, - "externalRegionId": "Datacenter:datacenter-1001", - "mapping": { - "Photon 4": { - "_links": { - "region": { - "href": 
"/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" - } - }, - "cloudConfig": "", - "constraints": [], - "description": "photon", - "externalId": "50028cf1-88b8-52e8-58a1-b8354d4207b0", - "externalRegionId": "Datacenter:datacenter-1001", - "id": "d417648249e9740d7561188fa2a3a3ab4e8ccf85", - "isPrivate": false, - "name": "photon", - "osFamily": "LINUX" - }, - "Windows Server 2019": { - "_links": { - "region": { - "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" - } - }, - "cloudConfig": "", - "constraints": [], - "description": "ws2019", - "externalId": "500235ad-1022-fec3-8ad1-00433beee103", - "externalRegionId": "Datacenter:datacenter-1001", - "id": "7e05f4e57ac55135cf7a7f8b951aa8ccfcc335d8", - "isPrivate": false, - "name": "ws2019", - "osFamily": "WINDOWS" - } - } + "content": [ + { + "_links": { + "region": { + "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" } - ], - "numberOfElements": 2, - "totalElements": 2 + }, + "externalRegionId": "Datacenter:datacenter-39056", + "mapping": { + "Photon 4": { + "_links": { + "region": { + "href": "/iaas/api/regions/3617c011-39db-466e-a7f3-029f4523548f" + } + }, + "cloudConfig": "", + "constraints": [], + "description": "photon-arm", + "externalId": "50023810-ae56-3c58-f374-adf6e0645886", + "externalRegionId": "Datacenter:datacenter-39056", + "id": "8885e87d8a5898cf12b5abc3e5c715e5a65f7179", + "isPrivate": false, + "name": "photon-arm", + "osFamily": "LINUX" + } + } + }, + { + "_links": { + "region": { + "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" + } + }, + "externalRegionId": "Datacenter:datacenter-1001", + "mapping": { + "Photon 4": { + "_links": { + "region": { + "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" + } + }, + "cloudConfig": "", + "constraints": [], + "description": "photon", + "externalId": "50028cf1-88b8-52e8-58a1-b8354d4207b0", + "externalRegionId": "Datacenter:datacenter-1001", + "id": "d417648249e9740d7561188fa2a3a3ab4e8ccf85", + "isPrivate": false, + "name": "photon", + "osFamily": "LINUX" + }, + "Windows Server 2019": { + "_links": { + "region": { + "href": "/iaas/api/regions/c0d2a662-9ee5-4a27-9a9e-e92a72668136" + } + }, + "cloudConfig": "", + "constraints": [], + "description": "ws2019", + "externalId": "500235ad-1022-fec3-8ad1-00433beee103", + "externalRegionId": "Datacenter:datacenter-1001", + "id": "7e05f4e57ac55135cf7a7f8b951aa8ccfcc335d8", + "isPrivate": false, + "name": "ws2019", + "osFamily": "WINDOWS" + } + } + } + ], + "numberOfElements": 2, + "totalElements": 2 } ``` This doesn't give me the *name* of the regions, but I could use the `_links.region.href` data to quickly match up images which exist in a given region.[^foreshadowing] @@ -376,7 +376,7 @@ I'll head into **Library > Actions** to create a new action inside my `com.virtu | `configurationName` | `string` | Name of Configuration | | `variableName` | `string` | Name of desired variable inside Configuration | -```javascript +```javascript {linenos=true} /* JavaScript: getConfigValue action Inputs: path (string), configurationName (string), variableName (string) @@ -396,7 +396,7 @@ Next, I'll create another action in my `com.virtuallypotato.utility` module whic ![vraLogin action](vraLogin_action.png) -```javascript +```javascript {linenos=true} /* JavaScript: vraLogin action Inputs: none @@ -428,7 +428,7 @@ I like to clean up after myself so I'm also going to create a `vraLogout` action |:--- |:--- |:--- | | `token` | `string` | Auth token of the session to destroy | -```javascript 
+```javascript {linenos=true} /* JavaScript: vraLogout action Inputs: token (string) @@ -458,7 +458,7 @@ My final "utility" action for this effort will run in between `vraLogin` and `vr |`uri`|`string`|Path to API controller (`/iaas/api/flavor-profiles`)| |`content`|`string`|Any additional data to pass with the request| -```javascript +```javascript {linenos=true} /* JavaScript: vraExecute action Inputs: token (string), method (string), uri (string), content (string) @@ -496,7 +496,7 @@ This action will: Other actions wanting to interact with the vRA REST API will follow the same basic formula, though with some more logic and capability baked in. Anyway, here's my first swing: -```JavaScript +```JavaScript {linenos=true} /* JavaScript: vraTester action Inputs: none @@ -513,7 +513,7 @@ Pretty simple, right? Let's see if it works: ![vraTester action](vraTester_action.png) It did! Though that result is a bit hard to parse visually, so I'm going to prettify it a bit: -```json {hl_lines=[17,35,56,74]} +```json {linenos=true,hl_lines=[17,35,56,74]} [ { "tags": [], @@ -609,7 +609,7 @@ This action will basically just repeat the call that I tested above in `vraTeste ![vraGetZones action](vraGetZones_action.png) -```javascript +```javascript {linenos=true} /* JavaScript: vraGetZones action Inputs: none @@ -639,7 +639,7 @@ Oh, and the whole thing is wrapped in a conditional so that the code only execut |:--- |:--- |:--- | | `zoneName` | `string` | The name of the Zone selected in the request form | -```javascript +```javascript {linenos=true} /* JavaScript: vraGetImages action Inputs: zoneName (string) Return type: array/string @@ -708,7 +708,7 @@ Next I'll repeat the same steps to create a new `image` input. This time, though ![Binding the input](image_input.png) The full code for my template now looks like this: -```yaml +```yaml {linenos=true} formatVersion: 1 inputs: zoneName: diff --git a/content/posts/gitea-self-hosted-git-server/index.md b/content/posts/gitea-self-hosted-git-server/index.md index 69bd209..ff96249 100644 --- a/content/posts/gitea-self-hosted-git-server/index.md +++ b/content/posts/gitea-self-hosted-git-server/index.md @@ -50,7 +50,7 @@ I've described the [process of creating a new instance on OCI in a past post](/f ### Prepare the server Once the server's up and running, I go through the usual steps of applying any available updates: -```bash +```command sudo apt update sudo apt upgrade ``` @@ -58,12 +58,12 @@ sudo apt upgrade #### Install Tailscale And then I'll install Tailscale using their handy-dandy bootstrap script: -```bash +```command curl -fsSL https://tailscale.com/install.sh | sh ``` When I bring up the Tailscale interface, I'll use the `--advertise-tags` flag to identify the server with an [ACL tag](https://tailscale.com/kb/1068/acl-tags/). ([Within my tailnet](/secure-networking-made-simple-with-tailscale/#acls)[^tailnet], all of my other clients are able to connect to devices bearing the `cloud` tag but `cloud` servers can only reach back to other devices for performing DNS lookups.) 
-```bash +```command sudo tailscale up --advertise-tags "tag:cloud" ``` @@ -72,12 +72,16 @@ sudo tailscale up --advertise-tags "tag:cloud" #### Install Docker Next I install Docker and `docker-compose`: -```bash +```command sudo apt install ca-certificates curl gnupg lsb-release curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg +``` +```command-session echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null +``` +```command sudo apt update sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-compose-plugin ``` @@ -85,8 +89,8 @@ sudo apt install docker-ce docker-ce-cli containerd.io docker-compose docker-com #### Configure firewall This server automatically had an iptables firewall rule configured to permit SSH access. For Gitea, I'll also need to configure HTTP/HTTPS access. [As before](/federated-matrix-server-synapse-on-oracle-clouds-free-tier/#firewall-configuration), I need to be mindful of the explicit `REJECT all` rule at the bottom of the `INPUT` chain: -```bash -$ sudo iptables -L INPUT --line-numbers +```command-session +sudo iptables -L INPUT --line-numbers Chain INPUT (policy ACCEPT) num target prot opt source destination 1 ts-input all -- anywhere anywhere @@ -99,15 +103,15 @@ num target prot opt source destination ``` So I'll insert the new rules at line 6: -```bash +```command sudo iptables -L INPUT --line-numbers sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 80 -j ACCEPT sudo iptables -I INPUT 6 -m state --state NEW -p tcp --dport 443 -j ACCEPT ``` And confirm that it did what I wanted it to: -```bash -$ sudo iptables -L INPUT --line-numbers +```command-session +sudo iptables -L INPUT --line-numbers Chain INPUT (policy ACCEPT) num target prot opt source destination 1 ts-input all -- anywhere anywhere @@ -122,8 +126,8 @@ num target prot opt source destination ``` That looks good, so let's save the new rules: -```bash -$ sudo netfilter-persistent save +```command-session +sudo netfilter-persistent save run-parts: executing /usr/share/netfilter-persistent/plugins.d/15-ip4tables save run-parts: executing /usr/share/netfilter-persistent/plugins.d/25-ip6tables save ``` @@ -139,19 +143,19 @@ I'm now ready to move on with installing Gitea itself. I'll start with creating a `git` user. This account will be set as the owner of the data volume used by the Gitea container, but will also (perhaps more importantly) facilitate [SSH passthrough](https://docs.gitea.io/en-us/install-with-docker/#ssh-container-passthrough) into the container for secure git operations. 
Here's where I create the account and also generate what will become the SSH key used by the git server: -```bash +```command sudo useradd -s /bin/bash -m git sudo -u git ssh-keygen -t ecdsa -C "Gitea Host Key" ``` The `git` user's SSH public key gets added as-is directly to that user's `authorized_keys` file: -```bash +```command sudo -u git cat /home/git/.ssh/id_ecdsa.pub | sudo -u git tee -a /home/git/.ssh/authorized_keys sudo -u git chmod 600 /home/git/.ssh/authorized_keys ``` When other users add their SSH public keys into Gitea's web UI, those will get added to `authorized_keys` with a little something extra: an alternate command to perform git actions instead of just SSH ones: -``` +```cfg command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-1",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ``` @@ -160,11 +164,13 @@ No users have added their keys to Gitea just yet so if you look at `/home/git/.s {{% /notice %}} So I'll go ahead and create that extra command: -```bash +```command-session cat <<"EOF" | sudo tee /usr/local/bin/gitea #!/bin/sh ssh -p 2222 -o StrictHostKeyChecking=no git@127.0.0.1 "SSH_ORIGINAL_COMMAND=\"$SSH_ORIGINAL_COMMAND\" $0 $@" EOF +``` +```command sudo chmod +x /usr/local/bin/gitea ``` @@ -174,26 +180,26 @@ So when I use a `git` command to interact with the server via SSH, the commands That takes care of most of the prep work, so now I'm ready to create the `docker-compose.yaml` file which will tell Docker how to host Gitea. I'm going to place this in `/opt/gitea`: -```bash +```command sudo mkdir -p /opt/gitea cd /opt/gitea ``` And I want to be sure that my new `git` user owns the `./data` directory which will be where the git contents get stored: -```bash +```command sudo mkdir data sudo chown git:git -R data ``` Now to create the file: -```bash +```command sudo vi docker-compose.yaml ``` The basic contents of the file came from the [Gitea documentation for Installation with Docker](https://docs.gitea.io/en-us/install-with-docker/), but I also included some (highlighted) additional environment variables based on the [Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/): `docker-compose.yaml`: -```yaml {hl_lines=["12-13","19-31",38,43]} +```yaml {linenos=true,hl_lines=["12-13","19-31",38,43]} version: "3" networks: @@ -292,7 +298,7 @@ With the config in place, I'm ready to fire it up: #### Start containers Starting Gitea is as simple as -```bash +```command sudo docker-compose up -d ``` which will spawn both the Gitea server as well as a `postgres` database to back it. @@ -305,7 +311,7 @@ I've [written before](/federated-matrix-server-synapse-on-oracle-clouds-free-tie #### Install Caddy So exactly how simple does Caddy make this? 
Well let's start with installing Caddy on the system: -```bash +```command sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list @@ -315,12 +321,12 @@ sudo apt install caddy #### Configure Caddy Configuring Caddy is as simple as creating a Caddyfile: -```bash +```command sudo vi /etc/caddy/Caddyfile ``` Within that file, I tell it which fully-qualified domain name(s) I'd like it to respond to (and manage SSL certificates for), as well as that I'd like it to function as a reverse proxy and send the incoming traffic to the same port `3000` that used by the Docker container: -``` +```caddy git.bowdre.net { reverse_proxy localhost:3000 } @@ -330,7 +336,7 @@ That's it. I don't need to worry about headers or ACME configurations or anythin #### Start Caddy All that's left at this point is to start up Caddy: -```bash +```command sudo systemctl enable caddy sudo systemctl start caddy sudo systemctl restart caddy @@ -357,25 +363,26 @@ And then I can log out and log back in with my new non-admin identity! #### Add SSH public key Associating a public key with my new Gitea account will allow me to easily authenticate my pushes from the command line. I can create a new SSH public/private keypair by following [GitHub's instructions](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent): -```shell +```command ssh-keygen -t ed25519 -C "user@example.com" ``` I'll view the contents of the public key - and go ahead and copy the output for future use: -``` -; cat ~/.ssh/id_ed25519.pub +```command-session +cat ~/.ssh/id_ed25519.pub ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com ``` Back in the Gitea UI, I'll click the user menu up top and select **Settings**, then the *SSH / GPG Keys* tab, and click the **Add Key** button: + ![User menu](user_menu.png) ![Adding a public key](add_key.png) I can give the key a name and then paste in that public key, and then click the lower **Add Key** button to insert the new key. To verify that the SSH passthrough magic I [configured earlier](#prepare-git-user) is working, I can take a look at `git`'s `authorized_keys` file: -```shell{hl_lines=3} -; sudo tail -2 /home/git/.ssh/authorized_keys +```command-session +sudo tail -2 /home/git/.ssh/authorized_keys # gitea public key command="/usr/local/bin/gitea --config=/data/gitea/conf/app.ini serv key-3",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,no-user-rc,restrict ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF5ExSsQfr6pAFBEZ7yx0oljSnpnOixvp8DS26STcx2J user@example.com ``` @@ -388,7 +395,7 @@ I'm already limiting this server's exposure by blocking inbound SSH (except for [Fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) can help with that by monitoring log files for repeated authentication failures and then creating firewall rules to block the offender. 
Installing Fail2ban is simple: -```shell +```command sudo apt update sudo apt install fail2ban ``` @@ -404,22 +411,22 @@ Specifically, I'll want to watch `log/gitea.log` for messages like the following ``` So let's create that filter: -```shell +```command sudo vi /etc/fail2ban/filter.d/gitea.conf ``` -`/etc/fail2ban/filter.d/gitea.conf`: -``` +```cfg +# /etc/fail2ban/filter.d/gitea.conf [Definition] failregex = .*(Failed authentication attempt|invalid credentials).* from ignoreregex = ``` Next I create the jail, which tells Fail2ban what to do: -```shell +```command sudo vi /etc/fail2ban/jail.d/gitea.conf ``` -`/etc/fail2ban/jail.d/gitea.conf`: -``` +```cfg +# /etc/fail2ban/jail.d/gitea.conf [gitea] enabled = true filter = gitea @@ -433,14 +440,14 @@ action = iptables-allports This configures Fail2ban to watch the log file (`logpath`) inside the data volume mounted to the Gitea container for messages which match the pattern I just configured (`gitea`). If a system fails to log in 5 times (`maxretry`) within 1 hour (`findtime`, in seconds) then the offending IP will be banned for 1 day (`bantime`, in seconds). Then I just need to enable and start Fail2ban: -```shell +```command sudo systemctl enable fail2ban sudo systemctl start fail2ban ``` To verify that it's working, I can deliberately fail to log in to the web interface and watch `/var/log/fail2ban.log`: -```shell -; sudo tail -f /var/log/fail2ban.log +```command-session +sudo tail -f /var/log/fail2ban.log 2022-07-17 21:52:26,978 fail2ban.filter [36042]: INFO [gitea] Found ${MY_HOME_IP}| - 2022-07-17 21:52:26 ``` @@ -470,10 +477,10 @@ The real point of this whole exercise was to sync my Obsidian vault to a Git ser Once it's created, the new-but-empty repository gives me instructions on how I can interact with it. Note that the SSH address uses the special `git.tadpole-jazz.ts.net` Tailscale domain name which is only accessible within my tailnet. -![Emtpy repository](empty_repo.png) +![Empty repository](empty_repo.png) Now I can follow the instructions to initialize my local Obsidian vault (stored at `~/obsidian-vault/`) as a git repository and perform my initial push to Gitea: -```shell +```command cd ~/obsidian-vault/ git init git add . diff --git a/content/posts/integrating-phpipam-with-vrealize-automation-8/index.md b/content/posts/integrating-phpipam-with-vrealize-automation-8/index.md index 858f2fe..3416629 100644 --- a/content/posts/integrating-phpipam-with-vrealize-automation-8/index.md +++ b/content/posts/integrating-phpipam-with-vrealize-automation-8/index.md @@ -23,13 +23,13 @@ If you'd just like to import a working phpIPAM integration into your environment Before even worrying about the SDK, I needed to [get a phpIPAM instance ready](https://phpipam.net/documents/installation/). I started with a small (1vCPU/1GB RAM/16GB HDD) VM attached to my "Home" network (`192.168.1.0/24`). I installed Ubuntu 20.04.1 LTS, and then used [this guide](https://computingforgeeks.com/install-and-configure-phpipam-on-ubuntu-debian-linux/) to install phpIPAM. 
Once phpIPAM was running and accessible via the web interface, I then used `openssl` to generate a self-signed certificate to be used for the SSL API connection: -```shell +```command sudo mkdir /etc/apache2/certificate cd /etc/apache2/certificate/ sudo openssl req -new -newkey rsa:4096 -x509 -sha256 -days 365 -nodes -out apache-certificate.crt -keyout apache.key ``` I edited the apache config file to bind that new certificate on port 443, and to redirect requests on port 80 to port 443: -```xml +```apache {linenos=true} ServerName ipam.lab.bowdre.net Redirect permanent / https://ipam.lab.bowdre.net @@ -54,7 +54,8 @@ After restarting apache, I verified that hitting `http://ipam.lab.bowdre.net` re Remember how I've got a "Home" network as well as [several internal networks](/vmware-home-lab-on-intel-nuc-9#networking) which only exist inside the lab environment? I dropped the phpIPAM instance on the Home network to make it easy to connect to, but it doesn't know how to talk to the internal networks where vRA will actually be deploying the VMs. So I added a static route to let it know that traffic to `172.16.0.0/16` would have to go through the Vyos router at `192.168.1.100`. This is Ubuntu, so I edited `/etc/netplan/99-netcfg-vmware.yaml` to add the `routes` section at the bottom: -```yaml +```yaml {linenos=true,hl_lines="17-20"} +# /etc/netplan/99-netcfg-vmware.yaml network: version: 2 renderer: networkd @@ -76,13 +77,17 @@ network: metric: 100 ``` I then ran `sudo netplan apply` so the change would take immediate effect and confirmed the route was working by pinging the vCenter's interface on the `172.16.10.0/24` network: +```command +sudo netplan apply ``` -john@ipam:~$ sudo netplan apply -john@ipam:~$ ip route +```command-session +ip route default via 192.168.1.1 dev ens160 proto static 172.16.0.0/16 via 192.168.1.100 dev ens160 proto static metric 100 192.168.1.0/24 dev ens160 proto kernel scope link src 192.168.1.14 -john@ipam:~$ ping 172.16.10.12 +``` +```command-session +ping 172.16.10.12 PING 172.16.10.12 (172.16.10.12) 56(84) bytes of data. 64 bytes from 172.16.10.12: icmp_seq=1 ttl=64 time=0.282 ms 64 bytes from 172.16.10.12: icmp_seq=2 ttl=64 time=0.256 ms @@ -94,7 +99,7 @@ rtt min/avg/max/mdev = 0.241/0.259/0.282/0.016 ms ``` Now would also be a good time to go ahead and enable cron jobs so that phpIPAM will automatically scan its defined subnets for changes in IP availability and device status. phpIPAM includes a pair of scripts in `INSTALL_DIR/functions/scripts/`: one for discovering new hosts, and the other for checking the status of previously discovered hosts. So I ran `sudo crontab -e` to edit root's crontab and pasted in these two lines to call both scripts every 15 minutes: -``` +```cron */15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/discoveryCheck.php */15 * * * * /usr/bin/php /var/www/html/phpipam/functions/scripts/pingCheck.php ``` @@ -200,7 +205,7 @@ Now that I know how to talk to phpIPAM via its RESP API, it's time to figure out I downloaded the SDK from [here](https://code.vmware.com/web/sdk/1.1.0/vmware-vrealize-automation-third-party-ipam-sdk). It's got a pretty good [README](https://github.com/jbowdre/phpIPAM-for-vRA8/blob/main/README_VMware.md) which describes the requirements (Java 8+, Maven 3, Python3, Docker, internet access) as well as how to build the package. 
I also consulted [this white paper](https://docs.vmware.com/en/vRealize-Automation/8.2/ipam_integration_contract_reqs.pdf) which describes the inputs provided by vRA and the outputs expected from the IPAM integration. The README tells you to extract the .zip and make a simple modification to the `pom.xml` file to "brand" the integration: -```xml +```xml {linenos=true,hl_lines="2-4"} phpIPAM phpIPAM integration for vRA @@ -216,7 +221,7 @@ The README tells you to extract the .zip and make a simple modification to the ` You can then kick off the build with `mvn package -PcollectDependencies -Duser.id=${UID}`, which will (eventually) spit out `./target/phpIPAM.zip`. You can then [import the package to vRA](https://docs.vmware.com/en/vRealize-Automation/8.3/Using-and-Managing-Cloud-Assembly/GUID-410899CA-1B02-4507-96AD-DFE622D2DD47.html) and test it against the `httpbin.org` hostname to validate that the build process works correctly. You'll notice that the form includes fields for Username, Password, and Hostname; we'll also need to specify the API app ID. This can be done by editing `./src/main/resources/endpoint-schema.json`. I added an `apiAppId` field: -```json +```json {linenos=true,hl_lines=[12,38]} { "layout":{ "pages":[ @@ -316,7 +321,7 @@ Example payload: ``` The `do_validate_endpoint` function has a handy comment letting us know that's where we'll drop in our code: -```python +```python {linenos=true} def do_validate_endpoint(self, auth_credentials, cert): # Your implemention goes here @@ -327,7 +332,7 @@ def do_validate_endpoint(self, auth_credentials, cert): response = requests.get("https://" + self.inputs["endpointProperties"]["hostName"], verify=cert, auth=(username, password)) ``` The example code gives us a nice start at how we'll get our inputs from vRA. So let's expand that a bit: -```python +```python {linenos=true} def do_validate_endpoint(self, auth_credentials, cert): # Build variables username = auth_credentials["privateKeyId"] @@ -336,19 +341,19 @@ def do_validate_endpoint(self, auth_credentials, cert): apiAppId = self.inputs["endpointProperties"]["apiAppId"] ``` As before, we'll construct the "base" URI by inserting the `hostname` and `apiAppId`, and we'll combine the `username` and `password` into our `auth` variable: -```python +```python {linenos=true} uri = f'https://{hostname}/api/{apiAppId}/ auth = (username, password) ``` I realized that I'd be needing to do the same authentication steps for each one of these operations, so I created a new `auth_session()` function to do the heavy lifting. Other operations will also need to return the authorization token but for this run we really just need to know whether the authentication was successful, which we can do by checking `req.status_code`. -```python +```python {linenos=true} def auth_session(uri, auth, cert): auth_uri = f'{uri}/user/' req = requests.post(auth_uri, auth=auth, verify=cert) return req ``` And we'll call that function from `do_validate_endpoint()`: -```python +```python {linenos=true} # Test auth connection try: response = auth_session(uri, auth, cert) @@ -367,7 +372,7 @@ After completing each operation, run `mvn package -PcollectDependencies -Duser.i Confirm that everything worked correctly by hopping over to the **Extensibility** tab, selecting **Action Runs** on the left, and changing the **User Runs** filter to say *Integration Runs*. ![Extensibility action runs](e4PTJxfqH.png) Select the newest `phpIPAM_ValidateEndpoint` action and make sure it has a happy green *Completed* status. 
You can also review the Inputs to make sure they look like what you expected: -```json +```json {linenos=true} { "__metadata": { "headers": { @@ -394,7 +399,7 @@ That's one operation in the bank! ### Step 6: 'Get IP Ranges' action So vRA can authenticate against phpIPAM; next, let's actually query to get a list of available IP ranges. This happens in `./src/main/python/get_ip_ranges/source.py`. We'll start by pulling over our `auth_session()` function and flesh it out a bit more to return the authorization token: -```python +```python {linenos=true} def auth_session(uri, auth, cert): auth_uri = f'{uri}/user/' req = requests.post(auth_uri, auth=auth, verify=cert) @@ -404,7 +409,7 @@ def auth_session(uri, auth, cert): return token ``` We'll then modify `do_get_ip_ranges()` with our needed variables, and then call `auth_session()` to get the necessary token: -```python +```python {linenos=true} def do_get_ip_ranges(self, auth_credentials, cert): # Build variables username = auth_credentials["privateKeyId"] @@ -418,7 +423,7 @@ def do_get_ip_ranges(self, auth_credentials, cert): token = auth_session(uri, auth, cert) ``` We can then query for the list of subnets, just like we did earlier: -```python +```python {linenos=true} # Request list of subnets subnet_uri = f'{uri}/subnets/' ipRanges = [] @@ -429,7 +434,7 @@ I decided to add the extra `filter_by=isPool&filter_value=1` argument to the que {{% notice note "Update" %}} I now filter for networks identified by the designated custom field like so: -```python +```python {linenos=true} # Request list of subnets subnet_uri = f'{uri}/subnets/' if enableFilter == "true": @@ -447,7 +452,7 @@ I now filter for networks identified by the designated custom field like so: Now is a good time to consult [that white paper](https://docs.vmware.com/en/VMware-Cloud-services/1.0/ipam_integration_contract_reqs.pdf) to confirm what fields I'll need to return to vRA. That lets me know that I'll need to return `ipRanges` which is a list of `IpRange` objects. `IpRange` requires `id`, `name`, `startIPAddress`, `endIPAddress`, `ipVersion`, and `subnetPrefixLength` properties. It can also accept `description`, `gatewayAddress`, and `dnsServerAddresses` properties, among others. Some of these properties are returned directly by the phpIPAM API, but others will need to be computed on the fly. For instance, these are pretty direct matches: -```python +```python {linenos=true} ipRange['id'] = str(subnet['id']) ipRange['description'] = str(subnet['description']) ipRange['subnetPrefixLength'] = str(subnet['mask']) @@ -458,32 +463,32 @@ ipRange['name'] = f"{str(subnet['subnet'])}/{str(subnet['mask'])}" ``` Working with IP addresses in Python can be greatly simplified by use of the `ipaddress` module, so I added an `import ipaddress` statement near the top of the file. I also added it to `requirements.txt` to make sure it gets picked up by the Maven build. I can then use that to figure out the IP version as well as computing reasonable start and end IP addresses: -```python +```python {linenos=true} network = ipaddress.ip_network(str(subnet['subnet']) + '/' + str(subnet['mask'])) ipRange['ipVersion'] = 'IPv' + str(network.version) ipRange['startIPAddress'] = str(network[1]) ipRange['endIPAddress'] = str(network[-2]) ``` I'd like to try to get the DNS servers from phpIPAM if they're defined, but I also don't want the whole thing to puke if a subnet doesn't have that defined. 
phpIPAM returns the DNS servers as a semicolon-delineated string; I need them to look like a Python list: -```python +```python {linenos=true} try: ipRange['dnsServerAddresses'] = [server.strip() for server in str(subnet['nameservers']['namesrv1']).split(';')] except: ipRange['dnsServerAddresses'] = [] ``` I can also nest another API request to find which address is marked as the gateway for a given subnet: -```python +```python {linenos=true} gw_req = requests.get(f"{subnet_uri}/{subnet['id']}/addresses/?filter_by=is_gateway&filter_value=1", headers=token, verify=cert) if gw_req.status_code == 200: gateway = gw_req.json()['data'][0]['ip'] ipRange['gatewayAddress'] = gateway ``` And then I merge each of these `ipRange` objects into the `ipRanges` list which will be returned to vRA: -```python +```python {linenos=true} ipRanges.append(ipRange) ``` After rearranging a bit and tossing in some logging, here's what I've got: -```python +```python {linenos=true} for subnet in subnets: ipRange = {} ipRange['id'] = str(subnet['id']) @@ -539,7 +544,7 @@ Next, we need to figure out how to allocate an IP. ### Step 7: 'Allocate IP' action I think we've got a rhythm going now. So we'll dive in to `./src/main/python/allocate_ip/source.py`, create our `auth_session()` function, and add our variables to the `do_allocate_ip()` function. I also created a new `bundle` object to hold the `uri`, `token`, and `cert` items so that I don't have to keep typing those over and over and over. -```python +```python {linenos=true} def auth_session(uri, auth, cert): auth_uri = f'{uri}/user/' req = requests.post(auth_uri, auth=auth, verify=cert) @@ -566,7 +571,7 @@ def do_allocate_ip(self, auth_credentials, cert): } ``` I left the remainder of `do_allocate_ip()` intact but modified its calls to other functions so that my new `bundle` would be included: -```python +```python {linenos=true} allocation_result = [] try: resource = self.inputs["resourceInfo"] @@ -581,7 +586,7 @@ except Exception as e: raise e ``` I also added `bundle` to the `allocate()` function: -```python +```python {linenos=true} def allocate(resource, allocation, context, endpoint, bundle): last_error = None @@ -598,7 +603,7 @@ def allocate(resource, allocation, context, endpoint, bundle): raise last_error ``` The heavy lifting is actually handled in `allocate_in_range()`. Right now, my implementation only supports doing a single allocation so I added an escape in case someone asks to do something crazy like allocate *2* IPs. I then set up my variables: -```python +```python {linenos=true} def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle): if int(allocation['size']) ==1: vmName = resource['name'] @@ -612,7 +617,7 @@ def allocate_in_range(range_id, resource, allocation, context, endpoint, bundle) raise Exception("Not implemented") ``` I construct a `payload` that will be passed to the phpIPAM API when an IP gets allocated to a VM: -```python +```python {linenos=true} payload = { 'hostname': vmName, 'description': f'Reserved by vRA for {owner} at {datetime.now()}' @@ -621,13 +626,13 @@ payload = { That timestamp will be handy when reviewing the reservations from the phpIPAM side of things. Be sure to add an appropriate `import datetime` statement at the top of this file, and include `datetime` in `requirements.txt`. So now we'll construct the URI and post the allocation request to phpIPAM. We tell it which `range_id` to use and it will return the first available IP. 
-```python +```python {linenos=true} allocate_uri = f'{uri}/addresses/first_free/{str(range_id)}/' allocate_req = requests.post(allocate_uri, data=payload, headers=token, verify=cert) allocate_req = allocate_req.json() ``` Per the white paper, we'll need to return `ipAllocationId`, `ipAddresses`, `ipRangeId`, and `ipVersion` to vRA in an `AllocationResult`. Once again, I'll leverage the `ipaddress` module for figuring the version (and, once again, I'll add it as an import and to the `requirements.txt` file). -```python +```python {linenos=true} if allocate_req['success']: version = ipaddress.ip_address(allocate_req['data']).version result = { @@ -643,7 +648,7 @@ else: return result ``` I also implemented a hasty `rollback()` in case something goes wrong and we need to undo the allocation: -```python +```python {linenos=true} def rollback(allocation_result, bundle): uri = bundle['uri'] token = bundle['token'] @@ -671,7 +676,7 @@ Almost done! ### Step 8: 'Deallocate IP' action The last step is to remove the IP allocation when a vRA deployment gets destroyed. It starts just like the `allocate_ip` action with our `auth_session()` function and variable initialization: -```python +```python {linenos=true} def auth_session(uri, auth, cert): auth_uri = f'{uri}/user/' req = requests.post(auth_uri, auth=auth, verify=cert) @@ -707,7 +712,7 @@ def do_deallocate_ip(self, auth_credentials, cert): } ``` And the `deallocate()` function is basically a prettier version of the `rollback()` function from the `allocate_ip` action: -```python +```python {linenos=true} def deallocate(resource, deallocation, bundle): uri = bundle['uri'] token = bundle['token'] @@ -731,7 +736,7 @@ You can review the full code [here](https://github.com/jbowdre/phpIPAM-for-vRA8/ [2021-02-22 01:36:29,476] [INFO] - Deallocating ip 172.16.40.3 from range 12 ``` And the Outputs section of the Details tab will show: -```json +```json {linenos=true} { "ipDeallocations": [ { diff --git a/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md b/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md index cca5a3b..8bbd43a 100644 --- a/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md +++ b/content/posts/joining-vms-to-active-directory-in-site-specific-ous-with-vra8/index.md @@ -12,7 +12,7 @@ tags: - windows title: Joining VMs to Active Directory in site-specific OUs with vRA8 --- -Connecting a deployed Windows VM to an Active Directory domain is pretty easy; just apply an appropriately-configured [customization spec](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-CAEB6A70-D1CF-446E-BC64-EC42CDB47117.html) and vCenter will take care of it for you. Of course, you'll likely then need to move the newly-created computer object to the correct Organizational Unit so that it gets all the right policies and such. +Connecting a deployed Windows VM to an Active Directory domain is pretty easy; just apply an appropriately-configured [customization spec](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vm_admin.doc/GUID-CAEB6A70-D1CF-446E-BC64-EC42CDB47117.html) and vCenter will take care of it for you. Of course, you'll likely then need to move the newly-created computer object to the correct Organizational Unit so that it gets all the right policies and such. Fortunately, vRA 8 supports adding an Active Directory integration to handle staging computer objects in a designated OU. 
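Done by hand, that post-deployment shuffle is usually a quick bit of PowerShell along these lines (a hypothetical sketch; the computer name and OU path here are placeholders, not my actual lab structure):
```powershell
# Hypothetical example: relocate a freshly-joined computer object to its proper OU
Import-Module ActiveDirectory
Get-ADComputer -Identity 'NEWVM01' |
  Move-ADObject -TargetPath 'OU=Servers,DC=lab,DC=bowdre,DC=net'
```
It works, but it's a manual step that's easy to forget.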
And vRA 8.3 even [introduced the ability](https://blogs.vmware.com/management/2021/02/whats-new-with-vrealize-automation-8-3-technical-overview.html#:~:text=New%20Active%20Directory%20Cloud%20Template%20Properties) to let blueprints override the relative DN path. That will be helpful in my case since I'll want the servers to be placed in different OUs depending on which site they get deployed to: @@ -42,17 +42,17 @@ As mentioned above, I'll leverage the customization specs in vCenter to handle t First, the workgroup spec, appropriately called `vra-win-workgroup`: ![Workgroup spec](AzAna5Dda.png) -It's about as basic as can be, including using DHCP for the network configuration (which doesn't really matter since the VM will eventually get a [static IP assigned from {php}IPAM](integrating-phpipam-with-vrealize-automation-8)). +It's about as basic as can be, including using DHCP for the network configuration (which doesn't really matter since the VM will eventually get a [static IP assigned from {php}IPAM](integrating-phpipam-with-vrealize-automation-8)). `vra-win-domain` is basically the same, with one difference: ![Domain spec](0ZYcORuiU.png) - + Now to reference these specs from a cloud template... ### Cloud template I want to make sure that users requesting a deployment are able to pick whether or not a system should be joined to the domain, so I'm going to add that as an input option on the template: -```yaml +```yaml {linenos=true} inputs: [...] adJoin: @@ -62,11 +62,11 @@ inputs: [...] ``` -This new `adJoin` input is a boolean so it will appear on the request form as a checkbox, and it will default to `true`; we'll assume that any Windows deployment should be automatically joined to AD unless this option gets unchecked. +This new `adJoin` input is a boolean so it will appear on the request form as a checkbox, and it will default to `true`; we'll assume that any Windows deployment should be automatically joined to AD unless this option gets unchecked. In the `resources` section of the template, I'll set a new property called `ignoreActiveDirectory` to be the inverse of the `adJoin` input; that will tell the AD integration not to do anything if the box to join the VM to the domain is unchecked. I'll also use `activeDirectory: relativeDN` to insert the appropriate site code into the DN where the computer object will be created. And, finally, I'll reference the `customizationSpec` and use [cloud template conditional syntax](https://docs.vmware.com/en/vRealize-Automation/8.4/Using-and-Managing-Cloud-Assembly/GUID-12F0BC64-6391-4E5F-AA48-C5959024F3EB.html#conditions-4) to apply the correct spec based on whether it's a domain or workgroup deployment. (These conditionals take the pattern `'${conditional-expresion ? true-value : false-value}'`). -```yaml +```yaml {linenos=true} resources: Cloud_vSphere_Machine_1: type: Cloud.vSphere.Machine @@ -81,7 +81,7 @@ resources: Here's the current cloud template in its entirety: -```yaml +```yaml {linenos=true} formatVersion: 1 inputs: site: @@ -214,7 +214,7 @@ I don't need to do anything else here since I'm not trying to do any fancy logic Now to submit the request through Service Broker to see if this actually works: ![Submitting the request](20210721-test-deploy-request.png) -After a few minutes, I can go into Cloud Assembly and navigate to **Extensibility > Activity > Actions Runs** and look at the **Integration Runs** to see if the `ad_machine` action has completed yet. 
+After a few minutes, I can go into Cloud Assembly and navigate to **Extensibility > Activity > Actions Runs** and look at the **Integration Runs** to see if the `ad_machine` action has completed yet. ![Successful ad_machine action](20210721-successful-ad_machine.png) Looking good! And once the deployment completes, I can look at the VM in vCenter to see that it has registered a fully-qualified DNS name since it was automatically joined to the domain: @@ -224,9 +224,9 @@ I can also repeat the test for a VM deployed to the `DRE` site just to confirm t ![Another domain-joined VM](20210721-vm-joined-2.png) And I'll fire off another deployment with the `adJoin` box *unchecked* to test that I can also skip the AD configuration completely: -![VM not joined to the domain](20210721-vm-not-joined.png) +![VM not joined to the domain](20210721-vm-not-joined.png) ### Conclusion -Confession time: I had actually started writing this posts weeks ago. At that point, my efforts to bend the built-in AD integration to my will had been fairly unsuccessful, so I was instead working on a custom vRO workflow to accomplish the same basic thing. I circled back to try the AD integration again after upgrading the vRA environment to the latest 8.4.2 release, and found that it actually works quite well now. So I happily scrapped my ~50 lines of messy vRO JavaScript in favor of *just three lines* of YAML in the cloud template. +Confession time: I had actually started writing this posts weeks ago. At that point, my efforts to bend the built-in AD integration to my will had been fairly unsuccessful, so I was instead working on a custom vRO workflow to accomplish the same basic thing. I circled back to try the AD integration again after upgrading the vRA environment to the latest 8.4.2 release, and found that it actually works quite well now. So I happily scrapped my ~50 lines of messy vRO JavaScript in favor of *just three lines* of YAML in the cloud template. I love it when things work out! \ No newline at end of file diff --git a/content/posts/k8s-on-vsphere-node-template-with-packer/index.md b/content/posts/k8s-on-vsphere-node-template-with-packer/index.md index b3ef6d1..b74fe22 100644 --- a/content/posts/k8s-on-vsphere-node-template-with-packer/index.md +++ b/content/posts/k8s-on-vsphere-node-template-with-packer/index.md @@ -54,7 +54,7 @@ Sounds pretty cool, right? I'm not going to go too deep into "how to Packer" in ## Prerequisites ### Install Packer Before being able to *use* Packer, you have to install it. On Debian/Ubuntu Linux, this process consists of adding the HashiCorp GPG key and software repository, and then simply installing the package: -```shell +```command curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" sudo apt-get update && sudo apt-get install packer @@ -113,7 +113,7 @@ Let's quickly run through that build process, and then I'll back up and examine ### `ubuntu-k8s.pkr.hcl` #### `packer` block The first block in the file tells Packer about the minimum version requirements for Packer as well as the external plugins used for the build: -``` +``` {linenos=true} // BLOCK: packer // The Packer configuration. packer { @@ -134,7 +134,7 @@ As I mentioned above, I'll be using the official [`vsphere` plugin](https://gith #### `data` block This section would be used for loading information from various data sources, but I'm only using it for the `sshkey` plugin (as mentioned above). 
-```text +``` {linenos=true} // BLOCK: data // Defines data sources. data "sshkey" "install" { @@ -147,7 +147,7 @@ This will generate an ECDSA keypair, and the public key will include the identif #### `locals` block Locals are a type of Packer variable which aren't explicitly declared in the `variables.pkr.hcl` file. They only exist within the context of a single build (hence the "local" name). Typical Packer variables are static and don't support string manipulation; locals, however, do support expressions that can be used to change their value on the fly. This makes them very useful when you need to combine variables into a single string or concatenate lists of SSH public keys (such as in the highlighted lines): -```text {hl_lines=[10,17]} +```text {linenos=true,hl_lines=[10,17]} // BLOCK: locals // Defines local variables. locals { @@ -182,7 +182,7 @@ The `source` block tells the `vsphere-iso` builder how to connect to vSphere, wh You'll notice that most of this is just mapping user-defined variables (with the `var.` prefix) to properties used by `vsphere-iso`: -```text +```text {linenos=true} // BLOCK: source // Defines the builder configuration blocks. source "vsphere-iso" "ubuntu-k8s" { @@ -284,7 +284,7 @@ source "vsphere-iso" "ubuntu-k8s" { #### `build` block This block brings everything together and executes the build. It calls the `source.vsphere-iso.ubuntu-k8s` block defined above, and also ties in a `file` and a few `shell` provisioners. `file` provisioners are used to copy files (like SSL CA certificates) into the VM, while the `shell` provisioners run commands and execute scripts. Those will be handy for the post-deployment configuration tasks, like updating and installing packages. -```text +```text {linenos=true} // BLOCK: build // Defines the builders to run, provisioners, and post-processors. build { @@ -323,7 +323,7 @@ Before looking at the build-specific variable definitions, let's take a quick lo Most of these carry descriptions with them so I won't restate them outside of the code block here: -```text +```text {linenos=true} /* DESCRIPTION: Ubuntu Server 20.04 LTS variables using the Packer Builder for VMware vSphere (vsphere-iso). @@ -724,7 +724,7 @@ The full `variables.pkr.hcl` can be viewed [here](https://github.com/jbowdre/vsp Packer automatically knows to load variables defined in files ending in `*.auto.pkrvars.hcl`. Storing the variable values separately from the declarations in `variables.pkr.hcl` makes it easier to protect sensitive values. So I'll start by telling Packer what credentials to use for connecting to vSphere, and what vSphere resources to deploy to: -```text +```text {linenos=true} /* DESCRIPTION: Ubuntu Server 20.04 LTS Kubernetes node variables used by the Packer Plugin for VMware vSphere (vsphere-iso). 
@@ -745,7 +745,7 @@ vsphere_folder = "_Templates" ``` I'll then describe the properties of the VM itself: -```text +```text {linenos=true} // Guest Operating System Settings vm_guest_os_language = "en_US" vm_guest_os_keyboard = "us" @@ -771,7 +771,7 @@ common_remove_cdrom = true ``` Then I'll configure Packer to convert the VM to a template once the build is finished: -```text +```text {linenos=true} // Template and Content Library Settings common_template_conversion = true common_content_library_name = null @@ -786,7 +786,7 @@ common_ovf_export_path = "" ``` Next, I'll tell it where to find the Ubuntu 20.04 ISO I downloaded and placed on a datastore, along with the SHA256 checksum to confirm its integrity: -```text +```text {linenos=true} // Removable Media Settings common_iso_datastore = "nuchost-local" iso_url = null @@ -797,7 +797,7 @@ iso_checksum_value = "5035be37a7e9abbdc09f0d257f3e33416c1a0fb322ba860d42d74 ``` And then I'll specify the VM's boot device order, as well as the boot command that will be used for loading the `cloud-init` coniguration into the Ubuntu installer: -```text +```text {linenos=true} // Boot Settings vm_boot_order = "disk,cdrom" vm_boot_wait = "4s" @@ -814,7 +814,7 @@ vm_boot_command = [ Once the installer is booted and running, Packer will wait until the VM is available via SSH and then use these credentials to log in. (How will it be able to log in with those creds? We'll take a look at the `cloud-init` configuration in just a minute...) -```text +```text {linenos=true} // Communicator Settings communicator_port = 22 communicator_timeout = "20m" @@ -832,7 +832,7 @@ ssh_keys = [ Finally, I'll create two lists of scripts that will be run on the VM once the OS install is complete. The `post_install_scripts` will be run immediately after the operating system installation. The `update-packages.sh` script will cause a reboot, and then the set of `pre_final_scripts` will do some cleanup and prepare the VM to be converted to a template. The last bit of this file also designates the desired version of Kubernetes to be installed. -```text +```text {linenos=true} // Provisioner Settings post_install_scripts = [ "scripts/wait-for-cloud-init.sh", @@ -864,7 +864,7 @@ Okay, so we've covered the Packer framework that creates the VM; now let's take See the bits that look `${ like_this }`? Those place-holders will take input from the [`locals` block of `ubuntu-k8s.pkr.hcl`](#locals-block) mentioned above. So that's how all the OS properties will get set, including the hostname, locale, LVM partition layout, username, password, and SSH keys. -```yaml +```yaml {linenos=true} #cloud-config autoinstall: version: 1 @@ -1068,7 +1068,7 @@ You can find all of the scripts [here](https://github.com/jbowdre/vsphere-k8s/tr #### `wait-for-cloud-init.sh` This simply holds up the process until the `/var/lib/cloud//instance/boot-finished` file has been created, signifying the completion of the `cloud-init` process: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Waiting for cloud-init...' while [ ! 
-f /var/lib/cloud/instance/boot-finished ]; do @@ -1078,7 +1078,7 @@ done #### `cleanup-subiquity.sh` Next I clean up any network configs that may have been created during the install process: -```shell +```shell {linenos=true} #!/bin/bash -eu if [ -f /etc/cloud/cloud.cfg.d/99-installer.cfg ]; then sudo rm /etc/cloud/cloud.cfg.d/99-installer.cfg @@ -1093,7 +1093,7 @@ fi #### `install-ca-certs.sh` The [`file` provisioner](#build-block) mentioned above helpfully copied my custom CA certs to the `/tmp/certs/` folder on the VM; this script will install them into the certificate store: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Installing custom certificates...' sudo cp /tmp/certs/* /usr/local/share/ca-certificates/ @@ -1106,7 +1106,7 @@ sudo /usr/sbin/update-ca-certificates #### `disable-multipathd.sh` This disables `multipathd`: -```shell +```shell {linenos=true} #!/bin/bash -eu sudo systemctl disable multipathd echo 'Disabling multipathd' @@ -1114,7 +1114,7 @@ echo 'Disabling multipathd' #### `disable-release-upgrade-motd.sh` And this one disable the release upgrade notices that would otherwise be displayed upon each login: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Disabling release update MOTD...' sudo chmod -x /etc/update-motd.d/91-release-upgrade @@ -1122,7 +1122,7 @@ sudo chmod -x /etc/update-motd.d/91-release-upgrade #### `persist-cloud-init-net.sh` I want to make sure that this VM keeps the same IP address following the reboot that will come in a few minutes, so I 'll set a quick `cloud-init` option to help make sure that happens: -```shell +```shell {linenos=true} #!/bin/sh -eu echo '>> Preserving network settings...' echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg @@ -1131,7 +1131,7 @@ echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg #### `configure-sshd.sh` Then I just set a few options for the `sshd` configuration, like disabling root login: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Configuring SSH' sudo sed -i 's/.*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config @@ -1143,7 +1143,7 @@ sudo sed -i 's/.*PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/ This script is a little longer and takes care of all the Kubernetes-specific settings and packages that will need to be installed on the VM. First I enable the required `overlay` and `br_netfilter` modules: -```shell +```shell {linenos=true} #!/bin/bash -eu echo ">> Installing Kubernetes components..." @@ -1159,7 +1159,7 @@ sudo modprobe br_netfilter ``` Then I'll make some networking tweaks to enable forwarding and bridging: -```shell +```shell {linenos=true} # Configure networking echo ".. configure networking" cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf @@ -1172,7 +1172,7 @@ sudo sysctl --system ``` Next, set up `containerd` as the container runtime: -```shell +```shell {linenos=true} # Setup containerd echo ".. setup containerd" sudo apt-get update && sudo apt-get install -y containerd apt-transport-https jq @@ -1182,7 +1182,7 @@ sudo systemctl restart containerd ``` Then disable swap: -```shell +```shell {linenos=true} # Disable swap echo ".. disable swap" sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^\(.*\)$/#\1/g' /etc/fstab @@ -1190,7 +1190,7 @@ sudo swapoff -a ``` Next I'll install the Kubernetes components and (crucially) `apt-mark hold` them so they won't be automatically upgraded without it being a coordinated change: -```shell +```shell {linenos=true} # Install Kubernetes echo ".. 
install kubernetes version ${KUBEVERSION}" sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg @@ -1201,7 +1201,7 @@ sudo apt-mark hold kubelet kubeadm kubectl #### `update-packages.sh` Lastly, I'll be sure to update all installed packages (excepting the Kubernetes ones, of course), and then perform a reboot to make sure that any new kernel modules get loaded: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Checking for and installing updates...' sudo apt-get update && sudo apt-get -y upgrade @@ -1214,7 +1214,7 @@ After the reboot, all that's left are some cleanup tasks to get the VM ready to #### `cleanup-cloud-init.sh` I'll start with cleaning up the `cloud-init` state: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Cleaning up cloud-init state...' sudo cloud-init clean -l @@ -1222,7 +1222,7 @@ sudo cloud-init clean -l #### `enable-vmware-customization.sh` And then be (re)enable the ability for VMware to be able to customize the guest successfully: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Enabling legacy VMware Guest Customization...' echo 'disable_vmware_customization: true' | sudo tee -a /etc/cloud/cloud.cfg @@ -1231,7 +1231,7 @@ sudo vmware-toolbox-cmd config set deployPkg enable-custom-scripts true #### `zero-disk.sh` I'll also execute this handy script to free up unused space on the virtual disk. It works by creating a file which completely fills up the disk, and then deleting that file: -```shell +```shell {linenos=true} #!/bin/bash -eu echo '>> Zeroing free space to reduce disk size' sudo sh -c 'dd if=/dev/zero of=/EMPTY bs=1M || true; sync; sleep 1; sync' @@ -1240,7 +1240,7 @@ sudo sh -c 'rm -f /EMPTY; sync; sleep 1; sync' #### `generalize.sh` Lastly, let's do a final run of cleaning up logs, temporary files, and unique identifiers that don't need to exist in a template. This script will also remove the SSH key with the `packer_key` identifier since that won't be needed anymore. -```shell +```shell {linenos=true} #!/bin/bash -eu # Prepare a VM to become a template. @@ -1293,7 +1293,7 @@ sudo rm -f /root/.bash_history ### Kick out the jams (or at least the build) Now that all the ducks are nicely lined up, let's give them some marching orders and see what happens. All I have to do is open a terminal session to the folder containing the `.pkr.hcl` files, and then run the Packer build command: -```shell +```command packer packer build -on-error=abort -force . ``` diff --git a/content/posts/ldaps-authentication-tanzu-community-edition/index.md b/content/posts/ldaps-authentication-tanzu-community-edition/index.md index e7093e8..94720b0 100644 --- a/content/posts/ldaps-authentication-tanzu-community-edition/index.md +++ b/content/posts/ldaps-authentication-tanzu-community-edition/index.md @@ -113,7 +113,7 @@ LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN #### Deploying the cluster That's the only thing I need to manually edit so now I can go ahead and create the cluster with: -``` +```command tanzu management-cluster create tce-mgmt -f tce-mgmt-deploy.yaml ``` @@ -136,11 +136,12 @@ Some addons might be getting installed! Check their status by running the follow ``` I obediently follow the instructions to switch to the correct context and verify that the addons are all running: -```bash -❯ kubectl config use-context tce-mgmt-admin@tce-mgmt +```command-session +kubectl config use-context tce-mgmt-admin@tce-mgmt Switched to context "tce-mgmt-admin@tce-mgmt". 
- -❯ kubectl get apps -A +``` +```command-session +kubectl get apps -A NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE tkg-system antrea Reconcile succeeded 5m2s 11m tkg-system metrics-server Reconcile succeeded 39s 11m @@ -158,21 +159,25 @@ I've got a TCE cluster now but it's not quite ready for me to authenticate with #### Load Balancer deployment The [guide I'm following from the TCE site](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/) assumes that I'm using NSX-ALB in my environment, but I'm not. So, [as before](/tanzu-community-edition-k8s-homelab/#deploying-kube-vip-as-a-load-balancer), I'll need to deploy [Scott Rosenberg's `kube-vip` Carvel package](https://github.com/vrabbi/tkgm-customizations): -```bash +```command git clone https://github.com/vrabbi/tkgm-customizations.git cd tkgm-customizations/carvel-packages/kube-vip-package kubectl apply -n tanzu-package-repo-global -f metadata.yml kubectl apply -n tanzu-package-repo-global -f package.yaml +``` +```command-session cat << EOF > values.yaml vip_range: 192.168.1.64-192.168.1.70 EOF +``` +```command tanzu package install kubevip -p kubevip.terasky.com -v 0.3.9 -f values.yaml ``` #### Modifying services to use the Load Balancer With the load balancer in place, I can follow the TCE instruction to modify the Pinniped and Dex services to switch from the `NodePort` type to the `LoadBalancer` type so they can be easily accessed from outside of the cluster. This process starts by creating a file called `pinniped-supervisor-svc-overlay.yaml` and pasting in the following overlay manifest: -```yaml +```yaml {linenos=true} #@ load("@ytt:overlay", "overlay") #@overlay/match by=overlay.subset({"kind": "Service", "metadata": {"name": "pinniped-supervisor", "namespace": "pinniped-supervisor"}}) --- @@ -203,8 +208,8 @@ spec: ``` This overlay will need to be inserted into the `pinniped-addon` secret which means that the contents need to be converted to a base64-encoded string: -```bash -❯ base64 -w 0 pinniped-supervisor-svc-overlay.yaml +```command-session +base64 -w 0 pinniped-supervisor-svc-overlay.yaml I0AgbG9hZCgi[...]== ``` {{% notice note "Avoid newlines" %}} @@ -212,14 +217,14 @@ The `-w 0` / `--wrap=0` argument tells `base64` to *not* wrap the encoded lines {{% /notice %}} I'll copy the resulting base64 string (which is much longer than the truncated form I'm using here), and paste it into the following command to patch the secret (which will be named after the management cluster name so replace the `tce-mgmt` part as appropriate): -```bash -❯ kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}' +```command-session +kubectl -n tkg-system patch secret tce-mgmt-pinniped-addon -p '{"data": {"overlays.yaml": "I0AgbG9hZCgi[...]=="}}' secret/tce-mgmt-pinniped-addon patched ``` I can watch as the `pinniped-supervisor` and `dexsvc` services get updated with the new service type: -```bash -❯ kubectl get svc -A -w +```command-session +kubectl get svc -A -w NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) pinniped-supervisor pinniped-supervisor NodePort 100.65.185.82 443:31234/TCP tanzu-system-auth dexsvc NodePort 100.70.238.106 5556:30167/TCP @@ -231,11 +236,13 @@ tanzu-system-auth dexsvc LoadBalancer 100.70.238.106 ``` I'll also need to restart the `pinniped-post-deploy-job` job to account for the changes I just made; that's accomplished by simply deleting the existing job. After a few minutes a new job will be spawned automagically. 
I'll just watch for the new job to be created: -```bash -❯ kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job +```command-session +kubectl -n pinniped-supervisor delete jobs pinniped-post-deploy-job job.batch "pinniped-post-deploy-job" deleted +``` -❯ kubectl get jobs -A -w +```command-session +kubectl get jobs -A -w NAMESPACE NAME COMPLETIONS DURATION AGE pinniped-supervisor pinniped-post-deploy-job 0/1 0s pinniped-supervisor pinniped-post-deploy-job 0/1 0s @@ -247,7 +254,7 @@ pinniped-supervisor pinniped-post-deploy-job 1/1 9s 9s Right now, I've got all the necessary components to support LDAPS authentication with my TCE management cluster but I haven't done anything yet to actually define who should have what level of access. To do that, I'll create a `ClusterRoleBinding`. I'll toss this into a file I'll call `tanzu-admins-crb.yaml`: -```yaml +```yaml {linenos=true} kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: @@ -267,23 +274,24 @@ I have a group in Active Directory called `Tanzu-Admins` which contains a group Once applied, users within that group will be granted the `cluster-admin` role[^roles]. Let's do it: -```bash -❯ kubectl apply -f tanzu-admins-crb.yaml +```command-session +kubectl apply -f tanzu-admins-crb.yaml clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created ``` Thus far, I've been using the default administrator context to interact with the cluster. Now it's time to switch to the non-admin context: -```bash -❯ tanzu management-cluster kubeconfig get +```command-session +tanzu management-cluster kubeconfig get You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt' - -❯ kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt +``` +```command-session +kubectl config use-context tanzu-cli-tce-mgmt@tce-mgmt Switched to context "tanzu-cli-tce-mgmt@tce-mgmt". ``` After assuming the non-admin context, the next time I try to interact with the cluster it should kick off the LDAPS authentication process. It won't look like anything is happening in the terminal: -```bash -❯ kubectl get nodes +```command-session +kubectl get nodes ``` @@ -294,8 +302,8 @@ Doing so successfully will yield: ![Dex login success!](dex_login_success.png) And the `kubectl` command will return the expected details: -```bash -❯ kubectl get nodes +```command-session +kubectl get nodes NAME STATUS ROLES AGE VERSION tce-mgmt-control-plane-v8l8r Ready control-plane,master 29h v1.21.5+vmware.1 tce-mgmt-md-0-847db9ddc-5bwjs Ready 28h v1.21.5+vmware.1 @@ -318,8 +326,8 @@ Other users hoping to work with a Tanzu Community Edition cluster will also need At this point, I've only configured authentication for the management cluster - not the workload cluster. The TCE community docs cover what's needed to make this configuration available in the workload cluster as well [here](https://tanzucommunityedition.io/docs/latest/vsphere-ldap-config/#configuration-steps-on-the-workload-cluster). [As before](/tanzu-community-edition-k8s-homelab/#workload-cluster), I created the deployment YAML for the workload cluster by copying the management cluster's deployment YAML and changing the `CLUSTER_NAME` and `VSPHERE_CONTROL_PLANE_ENDPOINT` values accordingly. This time I also deleted all of the `LDAP_*` and `OIDC_*` lines, but made sure to preserve the `IDENTITY_MANAGEMENT_TYPE: ldap` one. 
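The identity-related portion of that `tce-work-deploy.yaml` winds up being pretty sparse. A trimmed-down sketch (with an example endpoint address rather than my real one) looks something like this:
```yaml
# excerpt only - the rest of the file is copied from the management cluster's config
CLUSTER_NAME: tce-work
VSPHERE_CONTROL_PLANE_ENDPOINT: 192.168.1.60  # example IP, not necessarily the one I used
IDENTITY_MANAGEMENT_TYPE: ldap                # preserved so the cluster uses Pinniped/Dex auth
# all of the LDAP_* and OIDC_* lines are removed entirely
```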
I was then able to deploy the workload cluster with: -```bash -❯ tanzu cluster create --file tce-work-deploy.yaml +```command-session +tanzu cluster create --file tce-work-deploy.yaml Validating configuration... Creating workload cluster 'tce-work'... Waiting for cluster to be initialized... @@ -333,30 +341,33 @@ Workload cluster 'tce-work' created ``` Access the admin context: -```bash -❯ tanzu cluster kubeconfig get --admin tce-work +```command-session +tanzu cluster kubeconfig get --admin tce-work Credentials of cluster 'tce-work' have been saved You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work' - -❯ kubectl config use-context tce-work-admin@tce-work +``` +```command-session +kubectl config use-context tce-work-admin@tce-work Switched to context "tce-work-admin@tce-work". ``` Apply the same ClusterRoleBinding from before[^crb]: -```bash -❯ kubectl apply -f tanzu-admins-crb.yaml +```command-session +kubectl apply -f tanzu-admins-crb.yaml clusterrolebinding.rbac.authorization.k8s.io/tanzu-admins created ``` And finally switch to the non-admin context and log in with my AD account: -```bash -❯ tanzu cluster kubeconfig get tce-work +```command-session +tanzu cluster kubeconfig get tce-work ℹ You can now access the cluster by running 'kubectl config use-context tanzu-cli-tce-work@tce-work' - -❯ kubectl config use-context tanzu-cli-tce-work@tce-work +``` +```command-session +kubectl config use-context tanzu-cli-tce-work@tce-work Switched to context "tanzu-cli-tce-work@tce-work". - -❯ kubectl get nodes +``` +```command-session +kubectl get nodes NAME STATUS ROLES AGE VERSION tce-work-control-plane-zts6r Ready control-plane,master 12m v1.21.5+vmware.1 tce-work-md-0-bcfdc4d79-vn9xb Ready 11m v1.21.5+vmware.1 @@ -376,8 +387,8 @@ It took me quite a bit of trial and error to get this far and (being a k8s novic #### Checking and modifying `dex` configuration I had a lot of trouble figuring out how to correctly format the `member:1.2.840.113556.1.4.1941:` attribute in the LDAPS config so that it wouldn't get split into multiple attributes due to the trailing colon - and it took me forever to discover that was even the issue. What eventually did the trick for me was learning that I could look at (and modify!) the configuration for the `dex` app with: -```bash -❯ kubectl -n tanzu-system-auth edit configmaps dex +```command-session +kubectl -n tanzu-system-auth edit configmaps dex [...] groupSearch: baseDN: OU=LAB,DC=lab,DC=bowdre,DC=net @@ -396,12 +407,13 @@ This let me make changes on the fly until I got a working configuration and then #### Reviewing `dex` logs Authentication attempts (at least on the LDAPS side of things) will show up in the logs for the `dex` pod running in the `tanzu-system-auth` namespace. 
This is a great place to look to see if the user isn't being found, credentials are invalid, or the groups aren't being enumerated correctly:
-```bash
-❯ kubectl -n tanzu-system-auth get pods
+```command-session
+kubectl -n tanzu-system-auth get pods
NAME                   READY   STATUS    RESTARTS   AGE
dex-7bf4f5d4d9-k4jfl   1/1     Running   0          40h
-
-❯ kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl
+```
+```command-session
+kubectl -n tanzu-system-auth logs dex-7bf4f5d4d9-k4jfl
# no such user
{"level":"info","msg":"performing ldap search OU=LAB,DC=lab,DC=bowdre,DC=net sub (\u0026(objectClass=person)(sAMAccountName=johnny))","time":"2022-03-06T22:29:57Z"}
{"level":"error","msg":"ldap: no results returned for filter: \"(\u0026(objectClass=person)(sAMAccountName=johnny))\"","time":"2022-03-06T22:29:57Z"}
@@ -420,7 +432,7 @@ dex-7bf4f5d4d9-k4jfl   1/1     Running   0          40h
I couldn't figure out an elegant way to log out so that I could try authenticating as a different user, but I did discover that information about authenticated sessions gets stored in `~/.config/tanzu/pinniped/sessions.yaml`. The sessions expire after a while, but until that happens I'm able to keep on interacting with `kubectl` - and not given an option to re-authenticate even if I wanted to.
So in lieu of a handy logout option, I was able to remove the cached sessions by deleting the file:
-```bash
+```command
rm ~/.config/tanzu/pinniped/sessions.yaml
```
diff --git a/content/posts/logging-in-tce-cluster-from-new-device/index.md b/content/posts/logging-in-tce-cluster-from-new-device/index.md
index b75728c..b0a4905 100644
--- a/content/posts/logging-in-tce-cluster-from-new-device/index.md
+++ b/content/posts/logging-in-tce-cluster-from-new-device/index.md
@@ -24,7 +24,7 @@ comment: true # Disable comment if false.
When I [set up my Tanzu Community Edition environment](/tanzu-community-edition-k8s-homelab/), I did so from a Linux VM since the containerized Linux environment on my Chromebook doesn't support the `kind` bootstrap cluster used for the deployment. But now that the Kubernetes cluster is up and running, I'd like to be able to connect to it directly without the aid of a jumpbox. How do I get the appropriate cluster configuration over to my Chromebook?
The Tanzu CLI actually makes that pretty easy - once I figured out the appropriate incantation. I just needed to use the `tanzu management-cluster kubeconfig get` command on my Linux VM to export the `kubeconfig` of my management (`tce-mgmt`) cluster to a file:
-```shell
+```command
tanzu management-cluster kubeconfig get --admin --export-file tce-mgmt-kubeconfig.yaml
```
@@ -32,8 +32,8 @@ I then used `scp` to pull the file from the VM into my local Linux environment,
Now I'm ready to import the configuration locally with `tanzu login` on my Chromebook:
-```shell
-❯ tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
+```command-session
+tanzu login --kubeconfig ~/projects/tanzu-homelab/tanzu-setup/tce-mgmt-kubeconfig.yaml --context tce-mgmt-admin@tce-mgmt --name tce-mgmt
✔ successfully logged in to management cluster using the kubeconfig tce-mgmt
```
@@ -42,12 +42,13 @@ Pass in the full path to the exported kubeconfig file. 
This will help the Tanzu {{% /notice %}} Even though that's just importing the management cluster it actually grants access to both the management and workload clusters: -```shell -❯ tanzu cluster list +```command-session +tanzu cluster list NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES PLAN tce-work default running 1/1 1/1 v1.21.2+vmware.1 dev - -❯ tanzu cluster get tce-work +``` +```command-session +tanzu cluster get tce-work NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES tce-work default running 1/1 1/1 v1.21.2+vmware.1 ℹ @@ -62,8 +63,9 @@ NAME READY SEVERITY RE └─Workers └─MachineDeployment/tce-work-md-0 └─Machine/tce-work-md-0-687444b744-crc9q True 24h - -❯ tanzu management-cluster get +``` +```command-session +tanzu management-cluster get NAME NAMESPACE STATUS CONTROLPLANE WORKERS KUBERNETES ROLES tce-mgmt tkg-system running 1/1 1/1 v1.21.2+vmware.1 management @@ -90,24 +92,26 @@ Providers: ``` And I can then tell `kubectl` about the two clusters: -```shell -❯ tanzu management-cluster kubeconfig get tce-mgmt --admin +```command-session +tanzu management-cluster kubeconfig get tce-mgmt --admin Credentials of cluster 'tce-mgmt' have been saved You can now access the cluster by running 'kubectl config use-context tce-mgmt-admin@tce-mgmt' - -❯ tanzu cluster kubeconfig get tce-work --admin +``` +```command-session +tanzu cluster kubeconfig get tce-work --admin Credentials of cluster 'tce-work' have been saved You can now access the cluster by running 'kubectl config use-context tce-work-admin@tce-work' ``` And sure enough, there are my contexts: -```shell -❯ kubectl config get-contexts +```command-session +kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE tce-mgmt-admin@tce-mgmt tce-mgmt tce-mgmt-admin * tce-work-admin@tce-work tce-work tce-work-admin - -❯ kubectl get nodes -o wide +``` +```command-session +kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME tce-work-control-plane-vc2pb Ready control-plane,master 23h v1.21.2+vmware.1 192.168.1.132 192.168.1.132 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6 tce-work-md-0-687444b744-crc9q Ready 23h v1.21.2+vmware.1 192.168.1.133 192.168.1.133 VMware Photon OS/Linux 4.19.198-1.ph3 containerd://1.4.6 diff --git a/content/posts/logging-in-to-multiple-vcenter-servers-at-once-with-powercli/index.md b/content/posts/logging-in-to-multiple-vcenter-servers-at-once-with-powercli/index.md index 7a233a2..7a87437 100644 --- a/content/posts/logging-in-to-multiple-vcenter-servers-at-once-with-powercli/index.md +++ b/content/posts/logging-in-to-multiple-vcenter-servers-at-once-with-powercli/index.md @@ -17,12 +17,12 @@ I can, and here's how I do it. ### The Script The following Powershell script will let you define a list of vCenters to be accessed, securely store your credentials for each vCenter, log in to every vCenter with a single command, and also close the connections when they're no longer needed. It's also a great starting point for any other custom functions you'd like to incorporate into your PowerCLI sessions. -```powershell +```powershell {linenos=true} # PowerCLI_Custom_Functions.ps1 # Usage: # 0) Edit $vCenterList to reference the vCenters in your environment. # 1) Call 'Update-Credentials' to create/update a ViCredentialStoreItem to securely store your username and password. -# 2) Call 'Connect-vCenters' to open simultaneously connections to all the vCenters in your environment. 
+# 2) Call 'Connect-vCenters' to open simultaneous connections to all the vCenters in your environment.
# 3) Do PowerCLI things.
# 4) Call 'Disconnect-vCenters' to cleanly close all ViServer connections because housekeeping.
Import-Module VMware.PowerCLI
@@ -54,6 +54,6 @@ powershell.exe -NoExit -Command ". C:\Scripts\PowerCLI_Custom_Functions.ps1"
### The Usage
Now just use that shortcut to open up PowerCLI when you wish to do things. The custom functions will be loaded and waiting for you.
1. Start by running `Update-Credentials`. It will prompt you for the username+password needed to log into each vCenter listed in `$vCenterList`. These can be the same or different accounts, but you will need to enter the credentials for each vCenter since they get stored in a separate `ViCredentialStoreItem`. You'll also run this function again if you need to change the password(s) in the future.
-2. Log in to all the things by running `Connect-vCenters`.
+2. Log in to all the things by running `Connect-vCenters`. 
3. Do your work.
4. When you're finished, be sure to call `Disconnect-vCenters` so you don't leave sessions open in the background.
diff --git a/content/posts/nessus-essentials-on-tanzu-community-edition/index.md b/content/posts/nessus-essentials-on-tanzu-community-edition/index.md
index f863530..71debc7 100644
--- a/content/posts/nessus-essentials-on-tanzu-community-edition/index.md
+++ b/content/posts/nessus-essentials-on-tanzu-community-edition/index.md
@@ -28,7 +28,7 @@ Now that VMware [has released](https://blogs.vmware.com/vsphere/2022/01/announci
I start off by heading to [tenable.com/products/nessus/nessus-essentials](https://www.tenable.com/products/nessus/nessus-essentials) to register for a (free!) license key which will let me scan up to 16 hosts. I'll receive the key and download link in an email, but I'm not actually going to use that link to download the Nessus binary. I've got this shiny-and-new [Tanzu Community Edition Kubernetes cluster](/tanzu-community-edition-k8s-homelab/) that could use some more real workloads so I'll instead opt for the [Docker version](https://hub.docker.com/r/tenableofficial/nessus).
Tenable provides an [example `docker-compose.yml`](https://community.tenable.com/s/article/Deploy-Nessus-docker-image-with-docker-compose) to make it easy to get started:
-```yaml
+```yaml {linenos=true}
version: '3.1'
services:
@@ -46,7 +46,7 @@ services:
```
I can use that knowledge to craft something I can deploy on Kubernetes:
-```yaml
+```yaml {linenos=true}
apiVersion: v1
kind: Service
metadata:
@@ -92,18 +92,18 @@ spec:
        containerPort: 8834
```
-Note that I'm configuring the `LoadBalancer` to listen on port `443` and route traffic to the pod on port `8834` so that I don't have to remember to enter an oddball port number when I want to connect to the web interface. 
+Note that I'm configuring the `LoadBalancer` to listen on port `443` and route traffic to the pod on port `8834` so that I don't have to remember to enter an oddball port number when I want to connect to the web interface. 
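To make that port mapping a little more explicit, here's a trimmed-down sketch of just the `Service` piece (the `app: nessus` selector label here is an assumption on my part - it just needs to match whatever label the Deployment applies to the pod):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nessus
spec:
  type: LoadBalancer    # the cluster's load balancer hands out an external IP
  selector:
    app: nessus         # assumed label - must match the Deployment's pod template
  ports:
    - port: 443         # what the LoadBalancer listens on (and what I'll type in the browser)
      targetPort: 8834  # where the Nessus container is actually listening
```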
And now I can just apply the file: -```bash -❯ kubectl apply -f nessus.yaml +```command-session +kubectl apply -f nessus.yaml service/nessus created deployment.apps/nessus created ``` I'll give it a moment or two to deploy and then check on the service to figure out what IP I need to use to connect: -```bash -❯ kubectl get svc/nessus +```command-session +kubectl get svc/nessus NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nessus LoadBalancer 100.67.16.51 192.168.1.79 443:31260/TCP 57s ``` @@ -114,7 +114,7 @@ I point my browser to `https://192.168.1.79` and see that it's a great time for Eventually that gets replaced with a login screen, where I can authenticate using the username and password specified earlier in the YAML. ![Nessus login screen](nessus_login.png) -After logging in, I get prompted to run a discovery scan to identify hosts on the network. There's a note that hosts revealed by the discovery scan will *not* count against my 16-host limit unless/until I select individual hosts for more detailed scans. That's good to know for future efforts, but for now I'm focused on just scanning my one vCenter server so I dismiss the prompt. +After logging in, I get prompted to run a discovery scan to identify hosts on the network. There's a note that hosts revealed by the discovery scan will *not* count against my 16-host limit unless/until I select individual hosts for more detailed scans. That's good to know for future efforts, but for now I'm focused on just scanning my one vCenter server so I dismiss the prompt. What I *am* interested in is scanning my vCenter for the Log4Shell vulnerability so I'll hit the friendly blue **New Scan** button at the top of the *Scans* page to create my scan. That shows me a list of *Scan Templates*: ![Scan templates](scan_templates.png) @@ -142,4 +142,4 @@ And I can drill down into the vulnerability details: This reveals a handful of findings related to old 1.x versions of Log4j (which went EOL in 2015 - yikes!) as well as [CVE-2021-44832](https://nvd.nist.gov/vuln/detail/CVE-2021-44832) Remote Code Execution vulnerability (which is resolved in Log4j 2.17.1), but the inclusion of Log4j 2.17.0 in vCenter 7.0U3c *was* sufficient to close the highly-publicized [CVE-2021-44228](https://nvd.nist.gov/vuln/detail/CVE-2021-44228) Log4Shell vulnerability. Hopefully VMware can get these other Log4j vulnerabilities taken care of in another upcoming vCenter release. -So there's that curiosity satisfied, and now I've got a handy new tool to play with in my lab. \ No newline at end of file +So there's that curiosity satisfied, and now I've got a handy new tool to play with in my lab. \ No newline at end of file diff --git a/content/posts/powercli-list-linux-vms-and-datacenter-locations/index.md b/content/posts/powercli-list-linux-vms-and-datacenter-locations/index.md index 1d10204..8cd1335 100644 --- a/content/posts/powercli-list-linux-vms-and-datacenter-locations/index.md +++ b/content/posts/powercli-list-linux-vms-and-datacenter-locations/index.md @@ -22,13 +22,13 @@ tags: comment: true # Disable comment if false. --- -I recently needed to export a list of all the Linux VMs in a rather large vSphere environment spanning multiple vCenters (and the entire globe), and I wanted to include information about which virtual datacenter each VM lived in to make it easier to map VMs to their physical location. 
+I recently needed to export a list of all the Linux VMs in a rather large vSphere environment spanning multiple vCenters (and the entire globe), and I wanted to include information about which virtual datacenter each VM lived in to make it easier to map VMs to their physical location. I've got a [`Connect-vCenters` function](/logging-in-to-multiple-vcenter-servers-at-once-with-powercli/) that I use to quickly log into multiple vCenters at once. That then enables me to run a single query across the entire landscape - but what query? There isn't really a direct way to get datacenter information out of the results generated by `Get-VM`; I could run an additional `Get-Datacenter` query against each returned VM object but that doesn't sound very efficient. What I came up with is using `Get-Datacenter` to enumerate each virtual datacenter, and then list the VMs matching my query within: -```powershell +```powershell {linenos=true} $linuxVms = foreach( $datacenter in ( Get-Datacenter )) { Get-Datacenter $datacenter | Get-VM | Where { $_.ExtensionData.Config.GuestFullName -notmatch "win" -and $_.Name -notmatch "vcls" } | ` Select @{ N="Datacenter";E={ $datacenter.Name }}, diff --git a/content/posts/powershell-download-web-folder-contents/index.md b/content/posts/powershell-download-web-folder-contents/index.md index 73c9f3e..7465475 100644 --- a/content/posts/powershell-download-web-folder-contents/index.md +++ b/content/posts/powershell-download-web-folder-contents/index.md @@ -23,7 +23,7 @@ comment: true # Disable comment if false. We've been working lately to use [HashiCorp Packer](https://www.packer.io/) to standardize and automate our VM template builds, and we found a need to pull in all of the contents of a specific directory on an internal web server. This would be pretty simple for Linux systems using `wget -r`, but we needed to find another solution for our Windows builds. A coworker and I cobbled together a quick PowerShell solution which will download the files within a specified web URL to a designated directory (without recreating the nested folder structure): -```powershell +```powershell {linenos=true} $outputdir = 'C:\Scripts\Download\' $url = 'https://win01.lab.bowdre.net/stuff/files/' @@ -38,7 +38,7 @@ $WebResponse.Links | Select-Object -ExpandProperty href -Skip 1 | ForEach-Object $baseUrl = $url.split('/') # ['https', '', 'win01.lab.bowdre.net', 'stuff', 'files'] $baseUrl = $baseUrl[0,2] -join '//' # 'https://win01.lab.bowdre.net' $fileUrl = '{0}{1}' -f $baseUrl.TrimEnd('/'), $_ # 'https://win01.lab.bowdre.net/stuff/files/filename.ext' - Invoke-WebRequest -Uri $fileUrl -OutFile $filePath + Invoke-WebRequest -Uri $fileUrl -OutFile $filePath } ``` diff --git a/content/posts/psa-halt-replication-before-snapshotting-linked-vcenters/index.md b/content/posts/psa-halt-replication-before-snapshotting-linked-vcenters/index.md index 457af1c..bdf6541 100644 --- a/content/posts/psa-halt-replication-before-snapshotting-linked-vcenters/index.md +++ b/content/posts/psa-halt-replication-before-snapshotting-linked-vcenters/index.md @@ -9,7 +9,7 @@ title: 'PSA: halt replication before snapshotting linked vCenters' toc: false --- -It's a good idea to take a snapshot of your virtual appliances before applying any updates, just in case. When you have multiple vCenter appliances operating in Enhanced Link Mode, though, it's important to make sure that the snapshots are in a consistent state. 
The vCenter `vmdird` service is responsible for continuously syncing data between the vCenters within a vSphere Single Sign-On (SSO) domain. Reverting to a snapshot where `vmdird`'s knowledge of the environment dramatically differed from that of the other vCenters could cause significant problems down the road or even result in having to rebuild a vCenter from scratch. +It's a good idea to take a snapshot of your virtual appliances before applying any updates, just in case. When you have multiple vCenter appliances operating in Enhanced Link Mode, though, it's important to make sure that the snapshots are in a consistent state. The vCenter `vmdird` service is responsible for continuously syncing data between the vCenters within a vSphere Single Sign-On (SSO) domain. Reverting to a snapshot where `vmdird`'s knowledge of the environment dramatically differed from that of the other vCenters could cause significant problems down the road or even result in having to rebuild a vCenter from scratch. *(Yes, that's a lesson I learned the hard way - and warnings about that are tragically hard to come by from what I've seen. So I'm sharing my notes so that you can avoid making the same mistake.)* @@ -20,17 +20,17 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep 1. Open an SSH session to *all* the vCenters within the SSO domain. 2. Log in and enter `shell` to access the shell on each vCenter. 3. Verify that replication is healthy by running `/usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w [SSO_ADMIN_PASSWORD]` on each vCenter. You want to ensure that each host shows as available to all other hosts, and the message that `Partner is 0 changes behind.`: - - ```shell - root@vcsa [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass + ```commandroot-session + /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass Partner: vcsa2.lab.bowdre.net Host available: Yes Status available: Yes My last change number: 9346 Partner has seen my change number: 9346 Partner is 0 changes behind. - - root@vcsa2 [ ~ ]# /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass + ``` + ```commandroot-session + /usr/lib/vmware-vmdir/bin/vdcrepadmin -f showpartnerstatus -h localhost -u administrator -w $ssoPass Partner: vcsa.lab.bowdre.net Host available: Yes Status available: Yes @@ -40,13 +40,8 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep ``` 4. Stop `vmdird` on each vCenter by running `/bin/service-control --stop vmdird`: - ```shell - root@vcsa [ ~ ]# /bin/service-control --stop vmdird - Operation not cancellable. Please wait for it to finish... - Performing stop operation on service vmdird... - Successfully stopped service vmdird - - root@vcsa2 [ ~ ]# /bin/service-control --stop vmdird + ```commandroot-session + /bin/service-control --stop vmdird Operation not cancellable. Please wait for it to finish... Performing stop operation on service vmdird... Successfully stopped service vmdird @@ -54,13 +49,8 @@ Take these steps when you need to snapshot linked vCenters to avoid breaking rep 5. Snapshot the vCenter appliance VMs. 6. Start replication on each server again with `/bin/service-control --start vmdird`: - ```shell - root@vcsa [ ~ ]# /bin/service-control --start vmdird - Operation not cancellable. Please wait for it to finish... 
- Performing start operation on service vmdird... - Successfully started service vmdird - - root@vcsa2 [ ~ ]# /bin/service-control --start vmdird + ```commandroot-session + /bin/service-control --start vmdird Operation not cancellable. Please wait for it to finish... Performing start operation on service vmdird... Successfully started service vmdird diff --git a/content/posts/psa-microsoft-kb5022842-breaks-ws2022-secure-boot/index.md b/content/posts/psa-microsoft-kb5022842-breaks-ws2022-secure-boot/index.md index fec0bfb..272f3b5 100644 --- a/content/posts/psa-microsoft-kb5022842-breaks-ws2022-secure-boot/index.md +++ b/content/posts/psa-microsoft-kb5022842-breaks-ws2022-secure-boot/index.md @@ -37,7 +37,7 @@ So yeah. That's, uh, *not great.* If you've got any **Windows Server 2022** VMs with **[Secure Boot](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-898217D4-689D-4EB5-866C-888353FE241C.html)** enabled on **ESXi 6.7/7.x**, you'll want to make sure they *do not* get **KB5022842** until this problem is resolved. I put together a quick PowerCLI query to help identify impacted VMs in my environment: -```powershell +```powershell {linenos=true} $secureBoot2022VMs = foreach($datacenter in (Get-Datacenter)) { $datacenter | Get-VM | Where-Object {$_.Guest.OsFullName -Match 'Microsoft Windows Server 2022' -And $_.ExtensionData.Config.BootOptions.EfiSecureBootEnabled} | diff --git a/content/posts/recreating-hashnode-series-categories-in-jekyll-on-github-pages/index.md b/content/posts/recreating-hashnode-series-categories-in-jekyll-on-github-pages/index.md index 77d2dff..a9ddf7d 100644 --- a/content/posts/recreating-hashnode-series-categories-in-jekyll-on-github-pages/index.md +++ b/content/posts/recreating-hashnode-series-categories-in-jekyll-on-github-pages/index.md @@ -11,14 +11,14 @@ title: Recreating Hashnode Series (Categories) in Jekyll on GitHub Pages I recently [migrated this site](/virtually-potato-migrated-to-github-pages) from Hashnode to GitHub Pages, and I'm really getting into the flexibility and control that managing the content through Jekyll provides. So, naturally, after finalizing the move I got to work recreating Hashnode's "Series" feature, which lets you group posts together and highlight them as a collection. One of the things I liked about the Series setup was that I could control the order of the collected posts: my posts about [building out the vRA environment in my homelab](/series/vra8) are probably best consumed in chronological order (oldest to newest) since the newer posts build upon the groundwork laid by the older ones, while posts about my [other one-off projects](/series/projects) could really be enjoyed in any order. -I quickly realized that if I were hosting this pretty much anywhere *other* than GitHub Pages I could simply leverage the [`jekyll-archives`](https://github.com/jekyll/jekyll-archives) plugin to manage this for me - but, alas, that's not one of the [plugins supported by the platform](https://pages.github.com/versions/). I needed to come up with my own solution, and being still quite new to Jekyll (and this whole website design thing in general) it took me a bit of fumbling to get it right. +I quickly realized that if I were hosting this pretty much anywhere *other* than GitHub Pages I could simply leverage the [`jekyll-archives`](https://github.com/jekyll/jekyll-archives) plugin to manage this for me - but, alas, that's not one of the [plugins supported by the platform](https://pages.github.com/versions/). 
I needed to come up with my own solution, and being still quite new to Jekyll (and this whole website design thing in general) it took me a bit of fumbling to get it right. ### Reviewing the theme-provided option The Jekyll theme I'm using ([Minimal Mistakes](https://github.com/mmistakes/minimal-mistakes)) comes with [built-in support](https://mmistakes.github.io/mm-github-pages-starter/categories/) for a [category archive page](/series), which (like the [tags page](/tags)) displays all the categorized posts on a single page. Links at the top will let you jump to an appropriate anchor to start viewing the selected category, but it's not really an elegant way to display a single category. ![Posts by category](20210724-posts-by-category.png) It's a start, though, so I took a few minutes to check out how it's being generated. The category archive page lives at [`_pages/category-archive.md`](https://raw.githubusercontent.com/mmistakes/mm-github-pages-starter/master/_pages/category-archive.md): -```markdown +```markdown {linenos=true} --- title: "Posts by Category" layout: categories @@ -30,7 +30,7 @@ author_profile: true The `title` indicates what's going to be written in bold text at the top of the page, the `permalink` says that it will be accessible at `http://localhost/categories/`, and the nice little `author_profile` sidebar will appear on the left. This page then calls the `categories` layout, which is defined in [`_layouts/categories.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/categories.html): -```liquid +```liquid {linenos=true} {% raw %}--- layout: archive --- @@ -81,7 +81,7 @@ I wanted my solution to preserve the formatting that's used by the theme elsewhe ### Defining a new layout I create a new file called `_layouts/series.html` which will define how these new series pages get rendered. It starts out just like the default `categories.html` one: -```liquid +```liquid {linenos=true} {% raw %}--- layout: archive --- @@ -95,7 +95,7 @@ That `{{ content }}` block will let me define text to appear above the list of a ``` I'll be including two custom variables in the [Front Matter](https://jekyllrb.com/docs/front-matter/) for my category pages: `tag` to specify what category to filter on, and `sort_order` which will be set to `reverse` if I want the older posts up top. I'll be able to access these in the layout as `page.tag` and `page.sort_order`, respectively. So I'll go ahead and grab all the posts which are categorized with `page.tag`, and then decide whether the posts will get sorted normally or in reverse: -```liquid +```liquid {linenos=true} {% raw %}{% assign posts = site.categories[page.tag] %} {% if page.sort_order == 'reverse' %} {% assign posts = posts | reverse %} @@ -103,7 +103,7 @@ I'll be including two custom variables in the [Front Matter](https://jekyllrb.co ``` And then I'll loop through each post (in either normal or reverse order) and insert them into the rendered page: -```liquid +```liquid {linenos=true} {% raw %}
{% for post in posts %} {% include archive-single.html type=entries_layout %} @@ -112,7 +112,7 @@ And then I'll loop through each post (in either normal or reverse order) and ins ``` Putting it all together now, here's my new `_layouts/series.html` file: -```liquid +```liquid {linenos=true} {% raw %}--- layout: archive --- @@ -133,7 +133,7 @@ layout: archive ### Series pages Since I can't use a plugin to automatically generate pages for each series, I'll have to do it manually. Fortunately this is pretty easy, and I've got a limited number of categories/series to worry about. I started by making a new `_pages/series-vra8.md` and setting it up thusly: -```markdown +```markdown {linenos=true} {% raw %}--- title: "Adventures in vRealize Automation 8" layout: series @@ -154,7 +154,7 @@ Check it out [here](/series/vra8): ![vRA8 series](20210724-vra8-series.png) The other series pages will be basically the same, just without the reverse sort directive. Here's `_pages/series-tips.md`: -```markdown +```markdown {linenos=true} {% raw %}--- title: "Tips & Tricks" layout: series @@ -171,7 +171,7 @@ header: ### Changing the category permalink Just in case someone wants to look at all the post series in one place, I'll be keeping the existing category archive page around, but I'll want it to be found at `/series/` instead of `/categories/`. I'll start with going into the `_config.yml` file and changing the `category_archive` path: -```yaml +```yaml {linenos=true} category_archive: type: liquid # path: /categories/ @@ -182,7 +182,7 @@ tag_archive: ``` I'll also rename `_pages/category-archive.md` to `_pages/series-archive.md` and update its title and permalink: -```markdown +```markdown {linenos=true} {% raw %}--- title: "Posts by Series" layout: categories @@ -192,13 +192,13 @@ author_profile: true ``` ### Fixing category links in posts -The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (`/series/#vra8`) instead of the series feature pages I created (`/series/vra8`). +The bottom of each post has a section which lists the tags and categories to which it belongs. Right now, those are still pointing to the category archive page (`/series/#vra8`) instead of the series feature pages I created (`/series/vra8`). ![Old category link](20210724-old-category-link.png) -That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey. +That *works* but I'd rather it reference the fancy new pages I created. Tracking down where to make that change was a bit of a journey. I started with the [`_layouts/single.html`](https://github.com/mmistakes/minimal-mistakes/blob/master/_layouts/single.html) file which is the layout I'm using for individual posts. This bit near the end gave me the clue I needed: -```liquid +```liquid {linenos=true} {% raw %}