Note: This is merely my experience with WD NVMe SSDs, and is not a fully objective or scientific post. You may (and most likely do) know better than me on this topic. Whatever the case is, please keep it civil and respectful if you would like to let me (and/or others) know more on this topic, for educational purposes. I would love to learn more about this. But I’m done with buying any more WD NVMe SSDs, and that’s final.
Earlier this year (2024), I had a 1TB SN550 in my laptop whose controller became very unstable at 98TB written (it's rated for at least 4x that many writes, if not far more), and only by disabling PCIe ASPM both in the UEFI and in the Linux kernel boot arguments could I get I/O operations to work for more than 100MB without crashing. This SSD had been used for 2 years, maybe a little longer. Just a little.
I came up with the idea of disabling PCIe ASPM after looking at dmesg and noticing that I could sometimes get as far as logging in and launching SwayWM and i3status-rs with my config intact before an I/O error would occur. That indicated the flash was probably fine and it was probably a controller error (after all, dmesg was showing `controller reset`), and my thinking was that whatever caused the controller to change state, and thus go from working fine during the boot process to crashing after boot finished, was either power or thermal related.
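If you're hunting for the same symptoms, this is roughly how I'd search the kernel log. Note that the sample lines below are paraphrased (exact dmesg wording varies by kernel version), so treat this as a sketch:

```shell
# On the real machine I'd run something like:
#   sudo dmesg | grep -iE 'nvme|pcie|aspm'
# Sample (paraphrased) log showing a controller crash rather than flash failure:
sample_log="nvme nvme0: controller is down; will reset: CSTS=0xffffffff
blk_update_request: I/O error, dev nvme0n1, sector 2048"
# Count how many lines mention a reset (controller-level event):
resets=$(echo "$sample_log" | grep -c 'reset')
echo "controller reset lines: $resets"
```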
And that’s how my solution came down to disabling PCIe ASPM (and all PCIe and drive power management options I could find), and pointing a Noctua fan directly at the NVMe drive.
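For reference, here's a sketch of how the ASPM-disabling part looks on a GRUB-based setup. Paths and the exact UEFI toggle differ per distro and motherboard, so this is an assumption-laden example, not gospel:

```shell
# Hedged sketch: disable PCIe ASPM at boot on a GRUB-based distro.
# 1. In /etc/default/grub, add pcie_aspm=off to the kernel cmdline:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"
# 2. Regenerate and reboot:
#      sudo grub-mkconfig -o /boot/grub/grub.cfg && sudo reboot
# 3. After reboot, verify the kernel actually saw the parameter.
#    (A stand-in string is used here instead of reading /proc/cmdline.)
cmdline="BOOT_IMAGE=/vmlinuz-linux quiet pcie_aspm=off"
case "$cmdline" in
  *pcie_aspm=off*) aspm_state="disabled" ;;
  *) aspm_state="enabled" ;;
esac
echo "ASPM: $aspm_state"
```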
To recover my data, I moved the NVMe to a different machine (we'll call it the recovery rig), applied the above mitigations on the recovery rig, and had to use `ddrescue` with its map file to extract my data out in cycles (boot into a live ISO with a `ddrescue` script on USB, start `ddrescue` reading from the SSD, controller crashes, shut down and drain power fully (every last light on the motherboard must be off), boot into the live ISO, repeat).
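The per-cycle recovery command looked roughly like the below (GNU ddrescue; device and backup paths are examples, not my exact setup). The file-based demo underneath just illustrates why the map file matters: each pass resumes where the last one died, so a controller crash only loses the current read:

```shell
# The real per-cycle command would be something like:
#   ddrescue -d /dev/nvme0n1 /mnt/backup/disk.img /mnt/backup/disk.map
# The map file records what's been copied, so re-running resumes the rescue.
# Demo of the same resumable idea using plain files instead of devices:
src=$(mktemp); img=$(mktemp)
printf 'hello ssd' > "$src"
dd if="$src" of="$img" bs=1 count=5 2>/dev/null                     # pass 1 "crashes" at byte 5
dd if="$src" of="$img" bs=1 skip=5 seek=5 conv=notrunc 2>/dev/null  # pass 2 resumes from byte 5
recovered=$(cat "$img")
echo "$recovered"
rm -f "$src" "$img"
```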
The laptop is a 2020 Lenovo ThinkPad T14 Gen 1 AMD. The recovery rig is a custom gaming rig, with an ASUS MAXIMUS VIII HERO motherboard.
Also, because I bought 4x 1TB SN550 drives from Amazon US during an Amazon SG sale in 2021, when I went to enquire about warranty coverage for this failure in Feb 2024, I was told that it was ineligible for any warranty as the drive came from US and I was based in SG.
Last month, I bought a 2TB SN770 for my gaming rig, thinking: “well, it's only a gaming rig, mainly holding games (since SMB Folder Redirection stores my user folders on my NAS), so going with WD again because it was the cheapest 2TB on Amazon SG at the time is no big deal, right? I don't mind replacing it 3 years down the road once it wears out!”. Oh, how naive last-month me was.
I `dd`-transferred my data from a different 1TB SN550 (the one from the gaming rig) to the 2TB SN770. Less than a week into using the SN770, I ran `chkdsk` on Windows (as a GParted partition move needed it), and started getting BSoDs with error code WHEA_HARDWARE_ERROR on 9 out of 10 boot attempts. The actual blue screen would only last for less than a second before the machine force-rebooted itself, and I needed to record my monitor to catch the error code. Even after a successful boot, it was 50/50 whether it would crash within a few hours or not.
I was unsure if the `chkdsk` had screwed up my data in any way, wanted to continue gaming, and honestly wanted to take the easy way out using my 30-day Amazon return window. So I swapped the previous 1TB SN550 back into my gaming rig and put the SN770 in the recovery rig, this time to wipe its data for the Amazon return (the old gaming rig SN550 was still working fine, knocks on wood), and lo and behold: only the UEFI could detect it.
Using an EndeavourOS live ISO, whose boot process includes scanning all disks for LVM volumes to activate: consistently on the first boot after the machine was fully power drained (power switch off and all lights off), the LVM activation would cause the controller to I/O error, but /dev/nvme0 was still detected and populated in the filesystem. Subsequent reboots without a full power drain would cause the SSD to not be detected at all past the UEFI boot menu.
I was insistent on wiping my data before the return, so I thought about anything that could make the controller stable, and recalled my previous SN550 experience where I had stumbled on the idea of disabling PCIe ASPM and doing so had stabilized the controller. Thus, despite the lack of `controller reset` messages in dmesg (it went straight to `Input/output error`) and the fact that this was a brand new SSD, I disabled PCIe ASPM in the UEFI and Linux boot args again, and what do you know: I could now run `fdisk -l /dev/nvme0n1` and `dd if=/dev/urandom of=/dev/nvme0n1 bs=1M oflag=direct status=progress`.
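If you want a quick sanity check that a wipe actually landed, something like this works. A temp file stands in for `/dev/nvme0n1` here; on the real drive I'd also run `wipefs` (which prints nothing when no filesystem signatures remain):

```shell
# Sketch of verifying a wipe wrote what it should (file stands in for the device).
# On real hardware: sudo wipefs /dev/nvme0n1   # empty output = no FS signatures left
dev=$(mktemp)
dd if=/dev/urandom of="$dev" bs=1024 count=4 2>/dev/null
written=$(wc -c < "$dev" | tr -d ' ')
echo "bytes written: $written"
rm -f "$dev"
```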
I can finally return the drive in peace. Amazon is waiting.
The gaming rig is a custom build with an MSI B550-A Pro that supports PCIe 4.0 NVMe SSDs. The recovery rig supports PCIe 3.0, so the NVMe drive was running at 3.0 x4 there.
Side note: I had a theory that the SN770 would run cooler in the recovery rig since its speed was limited there, as I initially suspected the issue was PCIe 4.0 heat causing the SSD to thermally throttle. In my theory, the heat causing the throttle would come from the RTX 2060 and the CPU heatsink sitting basically “above” the NVMe slot on the B550-A Pro; but despite the much better NVMe slot positioning on the MAXIMUS rig and the lack of a GPU, alas, that was not the case.
No more Amazon sales shall tempt me into buying another WD NVMe SSD. I want my PCIe ASPM so I can avoid paying for unnecessary electricity, and WD now has a bad track record with me for PCIe ASPM support. Especially when a failure occurs not even a week into using the drive. I'm done.
For my laptop, I got a 2TB SK Hynix P31 Gold from Amazon SG with no offers (as I had to replace the drive ASAP to continue my personal endeavours), and for the gaming rig, the recent (Oct 2024) Prime Day sales had the 2TB Samsung 990 EVO for even cheaper than what I got the WD SN770 for, so I got 2 of those, 1 as a cold spare.
I still have 2x WD SN550 drives in use today on my R730xd server, and once those give out (hopefully not soon), they won't be replaced with another WD NVMe SSD, though I'm not sure what the replacement will be yet, or even what size, as I plan to restructure that part of my homelab (or more realistically, home-prod).
When I wrote this, I had just finished the `dd if=/dev/urandom of=/dev/nvme0n1` on the SN770; it's now a few days later and my money has been refunded after returning the SN770 to Amazon, so I'm now publishing this post.
I recently got a Ferris Sweep to get into the r/ErgoMechKeyboards cult, reduce shoulder strain from using a joint keyboard, and reduce finger movement and stretching when typing. I love this thing. I’m typing this very post with the Sweep on my iPad wirelessly!
⚠️ WARNING ⚠️: I am not responsible for any readers falling into rabbit holes of building keyboards or modding keymaps without restraint, instead of actually doing typing or work. Ask me how I know. You have been warned.
This is a Ferris Sweep with MX profile keys, using Akko Crystal Silver switches and the stock Anne Pro 2 keycaps I had unused. It is wireless thanks to a nice!nano clone (SuperMini nRF52840) microcontroller, and runs the ZMK firmware.
Once I had the keyboard, I started making my own keymap based on some personal preferences and habits I had already known about my keyboard usage. I started with Colemak, and fit all of the numbers and symbols and navigation keys that aren’t already covered by the default layer into the 2nd layer I named “num”. I then fit modifiers and “text editing” keys on the thumb keys and hold-taps.
While my initial keymap was mostly satisfactory, I did immediately start noticing issues with my typing habits that I never noticed on a normal ANSI stagger keyboard. Some of these could be fixed with optimising my keymap to my comforts, but most of these weren’t a keymap issue.
I had realized that while I was comfortable with typing in the Colemak layout, I was familiar with it on my Anne Pro 2 which had an ANSI stagger. This caused me to develop very bad finger habits, including:
The first issue was naturally resolved after typing for the first day. The whole reason I was interested in a split ergo was to have the rows staggered, so I just needed to follow through on that.
The other 2 issues however, were much harder to get used to, and even now on the 2nd week, I’m still tripping out over these. It’s hard to break these natural habits that form because of a bad fundamental layout such as the normal US ANSI layout with the ANSI stagger, especially when I was familiar enough with ANSI to hit 100WPM comfortably (I don’t see a point in training myself to be faster than this because I already type faster than my brain can think of what to type out).
For the spacebar, I tried adding it to both sides of the thumb cluster, as I originally thought it was a matter of which side's thumb I was using. But the right-thumb spacebar collected dust anyway, and when I went back to typing on an ANSI keyboard like my Anne Pro 2, I noticed the real issue: I was specifically using my right index finger to hit the spacebar when typing fast.
I also opted not to take typing lessons or typing tests or things of the sort, and jumped right into doing actual typing with the new Sweep layout. The reason was simple: I was never going to use over half the words from typing tests and stuff, not when most of my typing involves CLI words like `git`, `kubectl`, `ssh`, `sops`, `nvim` etc, swear words when I'm chatting with friends, tech words and names like “Kubernetes”, “Ceph”, “SSD”, “NVMe”, “Cilium”, “VLAN”, “WireGuard” etc, internet lingo like “LOL”, “LMAO”, “idk”, and passwords. Oh, and the ZMK keymap itself.
I never got very far with keybr to figure out any of these issues, but within the 2nd and 3rd day of daily driving this keyboard, I had already noticed and outlined the core issue with the bottom row finger positions.
Unfortunately, these issues can't be solved by simply editing the layout. I tried, but it breaks more than it fixes. So, I had no choice but to resign myself to rebuilding the proper habits.
If you find that you’re also struggling with bad typing habits that only show on split ergos, you’re not alone. It’s a part of the process. Keep at it and you’ll find yourself more comfortable with proper typing habits the more you use the split ergo.
I thought fewer keys would be more hassle. I was comfortable with layers, but only for “non-core” or less-used keys like media controls and the Home/End/PageUp/PageDown keys; putting everything except the alphabet behind layers was a whole new level.
However, fewer keys meant less finger movement, which meant it was actually more comfortable to hit the non-alphabet keys without moving my fingers nearly as far. Combined with Colemak, I could feel the results of the reduced finger movement very quickly. This came on top of the reduced shoulder strain, now that I could spread the keyboard out further and avoid cramping up my shoulders to get into a (previously not so) comfortable typing position. More on the shoulders coming up!
Despite the struggles, I was already very happy with using the Sweep in less than 2 days. The reason was simple: I loved the freedom of having a wireless split ergo keyboard.
I could place the keyboard wherever I wanted, however I wanted, without needing to cramp my fingers and shoulders up to accommodate the limited positioning of an un-split keyboard, and I could do this with just the device I was going to type on and the Sweep halves. Nothing more. I could already feel the difference in my shoulders by day 2.
It has to be wireless, split, and the ergo key layout, to achieve this level of freedom while retaining hand comfort. Without any one of those, I would not enjoy the keyboard at all.
See, I already have a wired Sweep with Kailh Choc Sunset switches. But, needing 2 cables made it very annoying to use with my laptop at my home desk with the cables running everywhere and restricting my positioning and angling of the halves, let alone use it on a much more portable iPad which I bring around more often than the laptop if I’m out but not at school or work.
However, I had various difficulties converting it to wireless (I have a pair of actual nice!nanos ready and unused), and I wasn’t even the one doing the soldering (I have none of the tools or experience). So, I ended up letting that keeb sit around unused for over a year.
(Also, I found QMK a little annoying to work with, and now having used ZMK, it reaffirmed my preference for ZMK over QMK.)
Recently, I finally found a local seller selling preassembled wireless Sweeps using nice!nano clones from SuperMini. I was hesitant at first as I really liked the low profile of the Choc, but after some thinking, and given how just less than a month before that I was revisiting the split ergo scene again, I decided to buy the keyboard anyway despite the MX profile, as I was really feeling the shoulder strain of a normal ANSI more than the last time I used my Choc Sweep.
Within a week, I had used my wireless MX Sweep a lot more than I had used the wired Choc Sweep in the whole period of owning it and having it assembled by a friend.
I don’t mind the Choc Sweep collecting dust for a while, because aside from switches, I had picked the absolute cheapest parts for that build, so as long as I can get my switches unsoldered, the rest of the build (including the wired Pro Micros) was less than $30. My MX Sweep looks much cleaner in terms of colors than my Choc Sweep anyway. The nice!nanos I bought and had planned to install in the Choc Sweep, as well as the Choc Sunset switches, will go to a future build, so it’s all good.
The main thing was I wanted a wireless split ergo now, and I wanted to stop giving myself excuses and procrastinating over it.
My current layout, which can be found on Git (current commit as of time of writing), currently features the following, tailored to my personal preferences and natural flow of thinking and typing:
2024-03-26: I have now added cross-hand settings to my bottom row mods, and I found that I’m liking them more than I thought I would.
I initially thought that the ZMK `hold-trigger-key-positions` setting meant that the mod activation would only work if the next key is on the other half, but upon re-reading, that is only true for the interrupt time window of the `balanced` or `hold-preferred` hold-tap flavors, between the initial press down and `tapping-term-ms`. It doesn't affect holding past `tapping-term-ms`, meaning you can still use same-hand mods once you hold the mod key past `tapping-term-ms`; no workflow changes there.

Additionally, I've only ever used mods on the left half of the keyboard on normal ANSI keyboards such as on laptops, as I find the right-hand mods uncomfortable to hold and use. However, upon adding the `hold-trigger-key-positions`, I found that I actually quite liked using cross-hand bottom row mods and having them activate blazing fast, and that right-hand mods were significantly more comfortable as bottom row mods on split ergo keyboards than where right-hand mods would be located on a normal ANSI keyboard.

So I now use urob's home-row mods behavior snippet from his zmk-config's README.md wholesale rather than tweaking it like I had before, and I love it. It's the same behavior I wanted for same-hand mods, for those lazy moments, but much faster cross-hand without introducing finger roll issues, and much more comfortable than normal ANSI keyboards' right-hand mods. I would definitely recommend giving the entire behavior snippet a try before tweaking it (like I always do).
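For anyone curious, the hold-tap behavior I'm describing looks roughly like this in the keymap's devicetree. This is adapted from urob's snippet, and `KEYS_R`/`THUMBS` are key-position-list macros that his config defines, so treat it as a hedged sketch rather than a drop-in:

```dts
/ {
    behaviors {
        // Left-hand home/bottom row mod: tap = letter, hold = modifier.
        hml: home_row_mod_left {
            compatible = "zmk,behavior-hold-tap";
            #binding-cells = <2>;
            flavor = "balanced";
            tapping-term-ms = <280>;
            quick-tap-ms = <175>;
            require-prior-idle-ms = <150>;
            bindings = <&kp>, <&kp>;
            // Only trigger the hold early when the next key is on the
            // other half (KEYS_R) or a thumb key; same-hand still works
            // by holding past tapping-term-ms.
            hold-trigger-key-positions = <KEYS_R THUMBS>;
            hold-trigger-on-release;
        };
    };
};
```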
I personally found ZMK much more flexible and cleaner to use than QMK.
Checking my layout into Git was easier with ZMK thanks to them supporting user config repos compared to QMK needing to fork the whole QMK repo to add your own keymap.
ZMK also allows for much more flexible time-based behavior tuning, such as having the hold-tap tuning be behavior based rather than for the whole keyboard, and generally being more straightforward to tune and understand.
I could get my desired keymap dialed in much quicker on ZMK than on QMK; I never did iron out the time-based tuning on QMK, after all. I also have many more time-based and mod-based keys on ZMK than I had on QMK, configured the way I wanted, because of how easy ZMK makes them to set up.
It’s also pretty neat that I can use ZMK keyboards in wired USB mode while still using only 1 cable, because the slave half will still connect to the master half using Bluetooth. So even if I have to use a cable, it’s still less infuriating than having the TRRS jack coming out the side of the keyboard and blocking certain keyboard positions I prefer to use.
(Oh, and ZMK's USB mode can run at 1000Hz and with eager debouncing (0ms press debounce, 5ms release debounce), making it technically possible to game with the wireless Sweep's left half connected over USB. The only issue left for me is ironing out the personal kinks in my GAME layer. It's 17 keys on one half, after all.)
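For reference, the eager debounce settings live in the keyboard's `.conf` file; the option names below are from ZMK's Kconfig, and the values are just the ones I mentioned, so adjust to taste:

```ini
# Eager debounce: report the press immediately, debounce only the release.
CONFIG_ZMK_KSCAN_DEBOUNCE_PRESS_MS=0
CONFIG_ZMK_KSCAN_DEBOUNCE_RELEASE_MS=5
```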
For a split ergo setup specifically, I would personally use ZMK over QMK as much as I can.
If you are also looking to increase typing comfort, while also building much better and more proper typing habits, a wireless split ergo such as the Sweep is definitely worth the investment.
A diodeless single-PCB keyboard like the Sweep also makes it more reassuring to bring around everywhere without needing to baby it as much, and takes less effort to rebuild.
A 34-key keyboard also forces the user to build better habits than a keyboard with more keys would, while being cheaper to get into, even if it might occasionally be a hassle to not have enough keys (e.g. for gaming).
I love my Sweep. It's been an amazing journey getting into using it full time, and I'm looking forward to fully mastering it and continuing to use it.
17.3.2024 15:35 Ferris Sweep

Everything in this post is my perspective, opinions and experience, and does not represent anyone else but myself. Also, I will be using Markdown URL texts for non-vulnerability-related links, but please, always check what link you're about to click on or visit, as part of general internet safety practices.
Simple but very effective self-hosted Git server (and now local browser!), with an amazing TUI and CLI over SSH.
I selfhost Soft Serve on my home Kubernetes cluster, for private projects.
Soft Serve Public Key Authentication Bypass Vulnerability when Keyboard-Interactive SSH Authentication is Enabled
My tl;dr: if the public key needs client-side verification, and `allow-keyless` is enabled (which turns on `keyboard-interactive`), a public key can match an account but bypass (fail) client-side validation and still successfully log in on Soft Serve servers.
Find out more on the GHSA page https://github.com/advisories/GHSA-mc97-99j4-vm2v or the MITRE page https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-43809 or my GitHub issue on it https://github.com/charmbracelet/soft-serve/issues/389.
The Charm team have been professional about handling this with minimal friction during the disclosure process, with regular progress updates and swift responses as they confirmed the vulnerability and worked to find the root cause and fix it.
So here comes what I consider the most unexpected way to find a vulnerability ever. I won’t be surprised if you think it’s a fake story, but I know for sure it was real. Else I wouldn’t have got my lazy ass to write a post about it, would I?
I am aboard a public bus, and my destination is only a few stops away, within 10 minutes.
I tapped my Android phone, which has a virtual wallet card registered to the NFC, to board the bus. Found a seat, opened Termux.
“Hmm, what should I do?” Undecided, I fiddle with k9s to look at the list of pods, wondering which app to mess with. “Ah, I got it, I need to reconfigure my Soft Serve after screwing around with making sure the PostgreSQL support was fully working on the latest release.”
“Damn, HelmRelease didn’t update, lemme fix that… okay, good to go.” `ssh softserve`
“Now, need my YubiKey to authenticate Soft Serve’s SSH with my YubiKey’s PGP SSH key…”
“Shit, it’s my stop, doors are about to close!” I literally jump up from my seat with one foot launching me forward, one hand grabbing onto my YubiKey, and one hand holding my Android phone with OpenKeychain prompt open.
And, the moment: *taps phone's NFC card to alight the bus; foreground app switches from Termux, with the OpenKeychain prompt open, to the virtual wallet app; alights and switches back to Termux* “Wait… why am I logged in? I don't remember even plugging the YubiKey in…”
I kill the SSH session, `ssh softserve` again, and when the OpenKeychain prompt came up this time, I intentionally clicked “Cancel”. To my shock, I was logged in again, the YubiKey having never been plugged in. This was no accident, nor was I seeing things.
Side note: Funnily enough, I had filed an issue on the bus ride back home about not being able to change the `allow-keyless` and `anon-access` settings when PostgreSQL was used, which led to 0.6.1 being released, and opted to properly test and report this vulnerability when I got home. I didn't know at the time that it would be the very setting that would mitigate this vulnerability for Soft Serve users who couldn't yet update to a patched version of 0.6.2 and above. So if you look at it a certain way, I was basically the reason for all the patch versions of the 0.6.x version family being released. All done from an Android phone. Oops!
12 September 2023: I found the potential vulnerability.
15 September 2023: I reported the vulnerability in a thread on the Charm Discord server with a description and listing the environments that I used to reproduce the vulnerability. Devs acknowledged and stated they would look into it.
16 September 2023: I screen recorded a PoC video using my Android, and reported my further discovery that turning `allow-keyless` off seemed to mitigate the vulnerability, amongst other details.
17-22 September 2023: Further communications between me and the devs to identify the root cause.
27 September 2023: PR with patched code opened and merged.
28 September 2023: I tested that the nightly build with the patch PR merged does fix the issue, and opened the GitHub issue for public transparency.
3 October 2023: v0.6.2 patch version released with verified fix, GitHub Security Advisory (GHSA) filed and I accepted credit. GitHub issue closed as completed.
5 October 2023: CVE-2023-43809 was published.
15.11.2023 02:00 CVE-2023-43809: My experience with my first CVE

In this post, I go over the repository structure that I use for my Kubernetes homelab, powered with GitOps by FluxCD.
Remember that there is no “right way” or “one size fits all” to repo structuring, define and be clear on your goals before you structure your repo.
NOTE: From here on out, all files will be in bold and prefixed by ./, while all folders will be in bold and suffixed by /, e.g. folder/ and ./file.yaml.
Define desired configuration of apps and services deployed in Kubernetes cluster.
Version control all configuration changes with Git.
Automate all the things! (as much as possible)
Establish a high security baseline for both the Kubernetes cluster's components and the GitOps components.
Allow for basic multi-cluster environments (production and non-production) while using a monorepo.
Allow for opting-in which apps/services/components to deploy per cluster, to optimize resource allocation (CPU, memory, storage, network etc).
Minimize human errors or forgetfulness by keeping duplicated code to an absolute minimum.
flux/
./kustomization.yaml
cluster-name/
Cluster specific folder
Any name is fine, e.g. prod/ and dev/
distro/
Cluster distribution’s configuration
e.g. Talos via talhelper (talos/)
config/
All Flux manifests for cluster-specific configuration state goes here
./flux-install.yaml
Flux OCIRepository source pointed to Flux's manifests OCI repo, scoped to the version branch
Fluxtomization to deploy and control Flux’s components (takes over control of Flux components from Flux bootstrap)
./flux-repo.yaml
Flux GitRepository source pointed to the user's repo (e.g. JJGadgets/Biohazard, onedr0p/home-ops, 0dragosh/homelab etc), using an SSH key to clone (and optionally push if configured)
“Master” Fluxtomization to deploy and control cluster-specific configuration (path points to cluster folder) and all other deployments (via cluster folder’s ./kustomization.yaml)
Includes configuration and patches for variable substitution (`${VARIABLE}`) and SOPS secret decryption of all other deployments
Includes patches for DRYing configs
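A hedged sketch of what such a “master” Fluxtomization might look like. The field names come from Flux's Kustomization API, but the resource names (`flux-repo`, `sops-age`, `vars`, `secrets`) and paths are placeholders for illustration, not my exact manifests:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 10m
  path: ./kube/clusters/prod/config   # points to the cluster folder
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-repo                   # the user's repo source
  decryption:
    provider: sops                    # decrypt SOPS-encrypted secrets.yaml
    secretRef:
      name: sops-age
  postBuild:
    substituteFrom:                   # fills in ${VARIABLE} at deploy-time
      - kind: ConfigMap
        name: vars
      - kind: Secret
        name: secrets
```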
./kustomization.yaml
Native Kubernetes Kustomization to control what resources are deployed, either by Fluxtomization or kubectl apply -k
Used here for opt-in deployment of all cluster-specific configuration manifests in cluster folder.
Used here for opt-in deployment of which apps folders to deploy to cluster (from kube/deploy/).
References the apps folders to deploy like so: ../../../deploy/apps/jellyfin
Each app folder will then have its own kustomization.yaml, which opts-in deployment of the namespaces needed as well as the app’s Fluxtomization (ks.yaml).
Indirectly the “master” Fluxtomization also controls the namespaces deployed (since there’s no Fluxtomizations in between the chain of kustomization.yamls).
This allows the app’s Fluxtomization to add “master” Fluxtomization as dependency (via dependsOn).
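As a sketch, an app's ks.yaml with dependsOn might look like this (names and paths are illustrative placeholders, not my exact manifests):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: jellyfin-app
  namespace: flux-system
spec:
  dependsOn:
    - name: cluster-config   # the "master" Fluxtomization must reconcile first
    - name: rook-ceph        # example: storage must be ready before the app
  interval: 10m
  path: ./kube/deploy/apps/jellyfin/app
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-repo
```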
NOTE (about namespacing): I choose to group apps in their own namespaces, unless an app requires multiple containers (e.g. server and web client, or a microservices architecture), or 2 different apps must be in the same namespace to share Kubernetes resources, or for other reasons.
My structure is a janky ghetto way to implement “multi-cluster” with a very specific purpose: having production and non-production (dev/test/staging/UAT, whichever is appropriate) clusters, where the differences between the clusters mostly come down to cluster-specific variables/secrets and control of which apps are deployed. This allows the non-prod cluster(s) to be scaled smaller than the prod cluster (e.g. I don't really need Jellyfin on both prod and staging if I don't change any of its configuration).
Other folks like onedr0p, bjw-s, 0dragosh etc will have the “master” Fluxtomization deploy a separate “apps” Fluxtomization that points to kubernetes/apps, which has kubernetes/apps/namespace/kustomization.yaml control both the namespace and the app Fluxtomizations (kubernetes/apps/namespace/app/ks.yaml) to deploy. That isn't a wrong way to structure for their needs; however, it means there is no easy way to control which cluster deploys which apps, which is something I want.
./secrets.yaml
Cluster-specific native Kubernetes secrets, encrypted-at-rest by SOPS (Flux supports decrypting SOPS-encrypted files).
Can be used via variable substitution at deploy-time.
external-secrets is preferred over storing secrets in here to easily sync and/or consume namespace-specific secrets.
One hacky way to have cluster-specific yet namespace-specific native Kubernetes Secrets (you can't reference a secret stored in the `flux-system` namespace from e.g. the `minecraft` namespace) is to variable-substitute the secret data into a `kind: Secret` manifest in kube/deploy/.
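A minimal sketch of that trick; `${APP_TOKEN}` would be filled in at deploy-time by Flux's postBuild substitution from the cluster's secrets.yaml (the names here are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-token
  namespace: minecraft     # app's own namespace, not flux-system
stringData:
  # Substituted by Flux from the SOPS-encrypted, cluster-specific secrets.yaml:
  token: "${APP_TOKEN}"
```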
./vars.yaml
Cluster-specific variables.
Can be used via variable substitution at deploy-time.
Apps, services and components to be deployed to clusters.
Contains baseline common manifests that are written to work across all clusters.
Cluster-specific configuration is achieved via variable substitution and optionally patches.
core/
apps/
User-facing apps and services.
app-name/
Replace app-name/ with app name, such as minecraft/, authentik/, or immich/.
./ns.yaml
./ks.yaml
All app-specific Fluxtomizations that deploy and control the app’s resources.
Points to either app/ and/or component-name/ folders which are explained below.
Use dependsOn to ensure that dependencies (such as storage solution and Ingress Controller) are deployed and configured before app’s resources are deployed and configured.
./repo.yaml
./kustomization.yaml
Native Kubernetes Kustomization to control what resources are deployed, either by Fluxtomization or kubectl apply -k
Used here for opting in all YAML files but not folders, since folders are controlled by the app-specific Fluxtomization, which itself is controlled by the “master” Fluxtomization via this file and the cluster's kustomization.yaml.
app/ OR component-name/
All manifests used to deploy Kubernetes resources needed for app are placed here.
Usually if both app/ and another folder such as config/ or certs/ are present, app/ is used to deploy components which are a dependency for the other folders’ resources to be deployed.
./netpol.yaml
CiliumNetworkPolicy or Kubernetes native Network Policy
“Principle of Least Privilege” and “Default Deny” practices are followed, only allow necessary connections
Commonly allowed include Ingress Controller, intra-namespace traffic, communication with other apps’ Kubernetes services as needed.
Not all Kubernetes components need to be network accessible by every application, such as APIServer, storage solution pods/services, or Flux. Assign only if such traffic is necessary for app to function.
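A hedged sketch of what such a netpol might look like as a CiliumNetworkPolicy; the labels and namespace names are examples, not my exact policy:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: jellyfin
  namespace: jellyfin
spec:
  endpointSelector:
    matchLabels:
      app.kubernetes.io/name: jellyfin
  ingress:
    # Default deny is implicit once a policy selects the pod;
    # only the rules below are allowed.
    - fromEndpoints:
        - {}   # intra-namespace traffic
    - fromEndpoints:
        - matchLabels:
            k8s:io.kubernetes.pod.namespace: ingress
            app.kubernetes.io/name: ingress-nginx   # the Ingress Controller
```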
./hr.yaml
`zfs list`: `-s [column name]` sorts `zfs list` output in ascending order by the selected column's data, while `-S [column name]` sorts the same, descending.
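For example (assuming a system with ZFS installed; the `sort(1)` demo below just illustrates the ascending vs. descending idea on sample sizes):

```shell
# On a real ZFS box these would be:
#   zfs list -s used    # datasets, ascending by USED
#   zfs list -S used    # same, descending
# Same ordering idea demoed with human-numeric sort on sample values:
asc=$(printf '3G\n1G\n2G\n' | sort -h | xargs)
desc=$(printf '3G\n1G\n2G\n' | sort -hr | xargs)
echo "ascending: $asc"
echo "descending: $desc"
```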
28.5.2019 00:00 Archives
Feel free to contact me via the profile icons on the sidebar/menu, or via the links on the bottom of the page.
Hey, I’m JJ. I have a deep interest in Information Technology. In particular, I am passionate about the following domains, many of which are applied in my homelab Git repo (https://github.com/JJGadgets/Biohazard), including but not limited to:
I enjoy exploring these domains by getting my hands dirty with these technologies in my Homelab, which features networking gear and server hardware, as well as a wide range of deployed software such as Talos Linux and OPNsense.
15.3.2025 11:23 About Me
15.3.2025 11:23 Contact Me