This article explains how to set up an SSH server initramfs unlock mechanism for a root filesystem encrypted with LUKS. I have been using this for years but never documented it!
I am used to the comfort of unlocking the partition thanks to an SSH server embedded in the initramfs. This setup has the security flaw that the initramfs could be replaced by a malicious party, but this is not something I am overly concerned about for my personal stuff so please ignore it.
All this relies on embedding an SSH server inside the initramfs:
apt update -qq
apt install dropbear-initramfs -y
The dropbear SSH server offers some configuration options through its command line:
printf '%s\n' 'DROPBEAR_OPTIONS="-I 600 -j -k -p 2222 -s -E -m -c /bin/cryptroot-unlock"' >>/etc/dropbear/initramfs/dropbear.conf
printf '%s\n' 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILOJV391WFRYgCVA2plFB8W8sF9LfbzXZOrxqaOrrwco' >/etc/dropbear/initramfs/authorized_keys
Here I set:
- -I 600: idle timeout of 10 minutes
- -j -k: disable local and remote port forwarding
- -p 2222: listen on port 2222
- -s: disable password logins so that only ssh key authentication is available
- -E: log to stderr (syslog is not available at this point in the boot process)
- -m: disable motd
- -c /bin/cryptroot-unlock: enforce a single command, no open shell
A personal preference of mine is to forego the predictable network interface naming of modern Linux. You can omit this step if you are fine with using enp0s3 instead of the simple eth0:
printf '%s\n' 'GRUB_CMDLINE_LINUX="net.ifnames=0"' >>/etc/default/grub
update-grub
Since this is a server, I configure networking statically on this host. Sadly this initramfs component does not support IPv6 yet:
printf '%s\n' 'IP=37.187.244.19::37.187.244.1:255.255.255.0:myth:eth0' >>/etc/initramfs-tools/initramfs.conf
update-initramfs -k all -u
The syntax is a bit obtuse, but here are the components of this line, separated by colons:
- 37.187.244.19: IP address of the server
- 37.187.244.1: gateway of the server
- 255.255.255.0: netmask of the server. Since this initramfs network configuration system does not support gateway on link routing, the netmask needs to be big enough to encompass your IP address and the one of your gateway. For example, for another host with IP address 51.77.159.16 and gateway 51.77.156.1, I need a 255.255.252.0 netmask.
- myth: hostname of the server
- eth0: interface to bring up
With all this done, I can reboot a server and remotely unlock it without having to open the provider's webui and use their clunky virtual KVM interface!
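Unlocking then looks like this from my laptop (the port and IP match the configuration above; dropbear in the initramfs logs you in as root, and the disk name in the prompt is just an example):
ssh -p 2222 root@37.187.244.19
# cryptroot-unlock runs right away and prompts something like:
# Please unlock disk sda3_crypt: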
7.3.2025 00:00 Unlocking a LUKS partition on boot via SSH on Debian

I like to keep up with what established operating systems or Linux distributions are doing even though I am not using them all every day. While trying out OpenSUSE again recently, I gave a first ever try to systemd-networkd.
Here is an example of how to configure your network statically with
systemd-networkd
. The quirk is that there is no way to specify two Gateway
attributes in a Network
block. Since you can have multiple Address
blocks,
this is an inconsistency that required some reading of the manual before it
clicked.
Here is what ended up working for my /etc/systemd/network/20-wired.network
:
[Match]
MACAddress=fa:16:3e:82:71:b7
[Network]
Address=37.187.244.19/32
Address=2001:41d0:401:3100::fd5/64
DNS=1.1.1.1
[Route]
Destination=0.0.0.0/0
Gateway=37.187.244.1
GatewayOnLink=yes
Metric=10
[Route]
Destination=::/0
Gateway=2001:41d0:401:3100::1
Metric=10
The GatewayOnLink attribute might not be needed for you. I am using it because this is an OVH box and this provider likes to reduce instance chatter by issuing /32 netmasks over DHCP. Though I could use a more standard netmask in this static configuration, I chose to respect their preference.
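After restarting the daemon, I can check that both default routes landed correctly (assuming the eth0 interface name; networkctl ships with systemd):
systemctl restart systemd-networkd
networkctl status eth0
ip route show
ip -6 route show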
In the end systemd-networkd
works well and I have no complaints other than
this quirkiness.
I listened to the Graphics Audio adaptation of Rhythm Of War. Just like for the previous audio books, I must say it was a very immersive experience that I highly recommend. The level of realization is even higher than before, with great effect and work put into acting the rhythms of the listener people’s tongue! I was taken aback by the voice of Shallan being different but got used to it.
25.2.2025 00:00 Rhythm Of War

I am used to building small abstraction layers over some OpenTofu/Terraform code via YAML input files. It would be too big an ask to require people (usually developers) unfamiliar with infrastructure automation to understand the intricacies of HCL, but filling in YAML (or JSON) files is no problem at all.
In this article I will explain how I perform some measure of validation on these input files, as well as handle default values.
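As an illustration, here is what a hypothetical users.yaml input file could look like, given the schema enforced by the validation module shown below:
---
alice:
  admin:
    aws: true
  email: 'alice@adyxax.org'
  github: 'alice-gh'
bob:
  email: 'bob@adyxax.org'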
I am using two nested modules to abstract this validation away. I name the top
module input
and its job is to read and decode the input files, then call the
nested validation
module with them.
A simplified version of this input
module contains the following:
output "data" {
description = "The output of the validation module."
value = module.validation
}
locals {
input_path = "${path.module}/../../../inputs"
}
module "validation" {
source = "./validation/"
teams = yamldecode(file("${local.input_path}/teams.yaml"))
users = yamldecode(file("${local.input_path}/users.yaml"))
}
There is a single output to expose the validated data. The input_path
should
obviously point to where your inputs
data lives.
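For reference, the expected layout of this inputs directory is simply:
inputs/
  teams.yaml
  users.yaml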
The validation
module does the heavy lifting of validating the input, handling
default values and mangling data in necessary ways. Here is a simplified
example:
output "aws_iam_users" {
description = "The aws IAM users data."
value = { for user, info in var.users :
user => info if info.admin.aws
}
}
output "users" {
description = "The users data."
value = var.users
}
variable "users" {
description = "The yaml decoded contents of the users input file."
nullable = false
type = map(object({
admin = optional(object({
aws = optional(bool, false)
github = optional(bool, false)
}), {})
email = string
github = optional(string, null)
}))
validation {
condition = alltrue([for _, info in var.users :
endswith(info.email, "@adyxax.org")
])
error_message = "A user's email must be for the @adyxax.org domain."
}
}
Here I have two outputs: one that mangles the input data a bit to filter AWS admin users, and another that simply returns the input data augmented with the default values. I added a validation block that checks that every user’s email address is on the proper domain.
Using this input module is as simple as:
module "input" {
source = "../modules/input/"
}
With this, you can then do something with module.input.data.users
or
module.input.data.aws_iam_users
. A common debugging step can be to run
OpenTofu or Terraform with the console
command and inspect the resulting input
data.
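For example, a quick console session (shown with tofu; terraform behaves the same) to inspect the validated data could look like:
$ tofu console
> module.input.data.users
> module.input.data.aws_iam_users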
The main limitation of this validation system is that invalid (or misspelled) keys in the original input file are simply ignored by OpenTofu/Terraform. I did not find a way around it with just terraform, which is frustrating!
A solution to this particular need that relies on outside tooling is to perform JSON schema or YAML schema validation. This solves the problem and runs nicely in a CI environment.
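As a sketch, a JSON schema for the users input file above could look like the following (hypothetical, written to mirror the validation module; the additionalProperties set to false on user entries is what catches misspelled keys). A tool like check-jsonschema can then run it against the YAML file in CI:
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "additionalProperties": {
    "type": "object",
    "additionalProperties": false,
    "required": ["email"],
    "properties": {
      "admin": {
        "type": "object",
        "additionalProperties": false,
        "properties": {
          "aws": { "type": "boolean" },
          "github": { "type": "boolean" }
        }
      },
      "email": { "type": "string", "pattern": "@adyxax\\.org$" },
      "github": { "type": "string" }
    }
  }
}
check-jsonschema --schemafile users.schema.json users.yaml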
This pattern is really useful; use it without moderation!
11.2.2025 00:00 Validating input files with OpenTofu/Terraform

The latest release of OpenTofu came with a much anticipated feature: provider iteration with for_each!
My code was already incompatible with terraform since OpenTofu added the much needed variable interpolation in provider blocks, so I was more than ready to take the plunge.
A good example is to rewrite the lengthy code from my Securing AWS default vpcs article from a few months ago. It now looks like this:
locals {
aws_regions = toset([
"ap-northeast-1",
"ap-northeast-2",
"ap-northeast-3",
"ap-south-1",
"ap-southeast-1",
"ap-southeast-2",
"ca-central-1",
"eu-central-1",
"eu-north-1",
"eu-west-1",
"eu-west-2",
"eu-west-3",
"sa-east-1",
"us-east-1",
"us-east-2",
"us-west-1",
"us-west-2",
])
}
provider "aws" {
alias = "all"
default_tags { tags = { "managed-by" = "tofu" } }
for_each = concat(local.aws_regions)
profile = "common"
region = each.key
}
module "default" {
for_each = local.aws_regions
providers = { aws = aws.all[each.key] }
source = "../modules/defaults"
}
Note the use of the concat() function in the for_each definition of the provider block. This is needed to silence a warning that tells you it is a bad idea to iterate through your providers using the same expression in provider definitions and module definitions.
Though I understand the reason (to allow for resource destruction when the list we are iterating on changes), it is not a bother for me in this case.
The main limitation at the moment is the inability to pass down the whole
aws.all
to a module. This leads to code that repeats itself a bit, but it is
still better than before.
For example, when creating resources for multiple aws accounts, a common pattern is to have your DNS managed in a specific account (for me it is named core) that you need to pass around. Let’s say you have another account named common with, for example, monitoring stuff; here is what a module invocation can look like:
module "base" {
providers = {
aws = aws.all["${var.environment}_${var.region}"]
aws.common = aws.all["common_us-east-1"]
aws.core = aws.all["core_us-east-1"]
}
source = "../modules/base"
...
}
It would be nice to be able to just pass down aws.all, but alas we cannot yet.
Just be warned that you cannot go too crazy with this mechanism. I tried to iterate through a cross-product of all AWS regions and a dozen AWS accounts and it did not go well: OpenTofu slowed down to a crawl and started taking a dozen minutes just to instantiate all providers in a folder, before planning any resources!
This is because providers are instantiated as separate processes that OpenTofu then talks to. This model does not scale that well (and consumes a fair bit of memory), at least for the time being.
I absolutely love this new feature!
25.1.2025 00:00 Opentofu provider iteration with `for_each`

The Nvidia device plugin for kubernetes is a daemonset that allows you to exploit GPUs in a kubernetes cluster. In particular, it allows you to request a number of GPUs from the pods’ spec.
This article presents the device plugin’s installation and usage on AWS EKS.
The main pre-requisite is that your nodes have the nvidia drivers and container toolkit installed. On EKS, this means using an AL2_x86_64_GPU
AMI.
The device plugin daemonset can be set up using the following OpenTofu/terraform code, which is adapted from https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/master/deployments/static/nvidia-device-plugin.yml :
resource "kubernetes_daemon_set_v1" "nvidia-k8s-device-plugin" {
metadata {
name = "nvidia-device-plugin"
namespace = "kube-system"
}
spec {
selector {
match_labels = {
name = "nvidia-device-plugin"
}
}
strategy {
type = "RollingUpdate"
}
template {
metadata {
annotations = {
"adyxax.org/promtail" = true
}
labels = {
name = "nvidia-device-plugin"
}
}
spec {
container {
image = format(
"%s:%s",
local.versions["nvidia-k8s-device-plugin"].image,
local.versions["nvidia-k8s-device-plugin"].tag,
)
name = "nvidia-device-plugin-ctr"
security_context {
allow_privilege_escalation = false
capabilities {
drop = ["ALL"]
}
}
volume_mount {
mount_path = "/var/lib/kubelet/device-plugins"
name = "data"
}
}
node_selector = {
adyxax-gpu-node = true
}
priority_class_name = "system-node-critical"
toleration {
effect = "NoSchedule"
key = "nvidia.com/gpu"
operator = "Exists"
}
volume {
host_path {
path = "/var/lib/kubelet/device-plugins"
}
name = "data"
}
}
}
}
wait_for_rollout = false
}
I add a node_selector to only provision the device plugin on nodes that need it, since I am also running non-GPU nodes in my clusters.
To grant GPU access to a pod, you set a resources limit and request. It is important that you set both, since GPUs are a non-overcommittable resource on kubernetes: when you request some, you also need to set an equal limit.
resources:
limits:
nvidia.com/gpu: 8
requests:
nvidia.com/gpu: 8
Note that all GPUs are detected as equal by the device plugin. If your cluster mixes nodes with different GPU hardware configurations, you will need to use taints and tolerations to make sure your workloads are assigned correctly.
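For instance, a hypothetical pod spec pinning a workload to a specific GPU node pool could combine a node selector with the matching toleration (the adyxax-gpu-type label and its value are made up for this example):
spec:
  nodeSelector:
    adyxax-gpu-type: 'a100'
  tolerations:
    - key: 'nvidia.com/gpu'
      operator: 'Exists'
      effect: 'NoSchedule'
  containers:
    - name: 'training'
      resources:
        limits:
          nvidia.com/gpu: 8
        requests:
          nvidia.com/gpu: 8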
It works well as is. I have not played with either GPU time slicing or MPS.
19.1.2025 00:00 Deploy the Nvidia device plugin for kubernetes

AWS capacity blocks for machine learning are a short term GPU instance reservation mechanism. The feature is somewhat recent and has some rough edges when used via OpenTofu/terraform because of the incomplete documentation. I had to figure things out the hard way a few months ago; here they are.
When you reserve a capacity block, you get a capacity reservation id. You need to feed this id to an EC2 launch template. The twist is that you also need to use a specific instance market option not specified in the AWS provider’s documentation for this to work:
resource "aws_launch_template" "main" {
capacity_reservation_specification {
capacity_reservation_target {
capacity_reservation_id = "cr-XXXXXX"
}
}
instance_market_options {
market_type = "capacity-block"
}
instance_type = "p4d.24xlarge"
# soc2: IMDSv2 for all ec2 instances
metadata_options {
http_endpoint = "enabled"
http_put_response_hop_limit = 1
http_tokens = "required"
instance_metadata_tags = "enabled"
}
name = "imdsv2-${var.name}"
}
In order to use a capacity block reservation for a kubernetes node group, you need to set the CAPACITY_BLOCK capacity type and reference the launch template above:
resource "aws_eks_node_group" "main" {
for_each = var.node_groups
ami_type = each.value.gpu ? "AL2_x86_64_GPU" : null
capacity_type = each.value.capacity_reservation != null ? "CAPACITY_BLOCK" : null
cluster_name = aws_eks_cluster.main.name
labels = {
adyxax-gpu-node = each.value.gpu
adyxax-node-group = each.key
}
launch_template {
name = aws_launch_template.imdsv2[each.key].name
version = aws_launch_template.imdsv2[each.key].latest_version
}
node_group_name = each.key
node_role_arn = aws_iam_role.nodes.arn
scaling_config {
desired_size = each.value.scaling.min
max_size = each.value.scaling.max
min_size = each.value.scaling.min
}
subnet_ids = local.subnet_ids
tags = {
"k8s.io/cluster-autoscaler/enabled" = each.value.capacity_reservation == null
}
update_config {
max_unavailable = 1
}
version = local.versions.aws-eks.nodes-version
depends_on = [
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
aws_iam_role_policy_attachment.AmazonEKSCNIPolicy,
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
]
lifecycle {
create_before_destroy = true
ignore_changes = [scaling_config[0].desired_size]
}
}
There is a terraform resource to provision the capacity blocks themselves that might be of interest, but I did not attempt to use it seriously. Capacity blocks are never available right when you create them: you need to book them days (sometimes weeks) in advance. Though OpenTofu/terraform has some basic date and time handling functions I could use to work around this, my needs are too sparse to go through the hassle of automating this.
4.1.2025 00:00 AWS capacity blocks with OpenTofu/terraform

I am migrating several services from a NixOS server (dalinar.adyxax.org) to a Debian server (lore.adyxax.org). Here is how I performed the operation for my self hosted vaultwarden.
The meta/main.yaml
contains the role dependencies:
---
dependencies:
- role: 'borg'
- role: 'nginx'
- role: 'podman'
- role: 'postgresql'
The tasks/main.yaml
just creates a data directory and fetches the admin secret token from a terraform state. All the heavy lifting is then done by calling other roles:
---
- name: 'Make vaultwarden data directory'
file:
path: '/srv/vaultwarden'
owner: 'root'
group: 'root'
mode: '0750'
state: 'directory'
- include_role:
name: 'postgresql'
tasks_from: 'database'
vars:
postgresql:
name: 'vaultwarden'
- name: 'Load the tofu state to read the database encryption key'
include_vars:
file: '../tofu/04-apps/terraform.tfstate' # TODO use my http backend instead
name: 'tofu_state_vaultwarden'
- set_fact:
vaultwarden_argon2_token: "{{ tofu_state_vaultwarden | json_query(\"resources[?type=='random_password'&&name=='vaultwarden_argon2_token'].instances[0].attributes.result\") }}"
- include_role:
name: 'podman'
tasks_from: 'container'
vars:
container:
name: 'vaultwarden'
env_vars:
- name: 'ADMIN_TOKEN'
value: "'{{ vaultwarden_argon2_token[0] }}'"
- name: 'DATABASE_MAX_CONNS'
value: '2'
- name: 'DATABASE_URL'
value: 'postgres://vaultwarden:{{ ansible_local.postgresql_vaultwarden.password }}@10.88.0.1/vaultwarden?sslmode=disable'
image: '{{ versions.vaultwarden.image }}:{{ versions.vaultwarden.tag }}'
publishs:
- container_port: '80'
host_port: '8083'
ip: '127.0.0.1'
volumes:
- dest: '/data'
src: '/srv/vaultwarden'
- include_role:
name: 'nginx'
tasks_from: 'vhost'
vars:
vhost:
name: 'vaultwarden'
path: 'roles/vaultwarden/files/nginx-vhost.conf'
- include_role:
name: 'borg'
tasks_from: 'client'
vars:
client:
jobs:
- name: 'data'
paths:
- '/srv/vaultwarden'
- name: 'postgres'
command_to_pipe: "su - postgres -c '/usr/bin/pg_dump -b -c -C -d vaultwarden'"
name: 'vaultwarden'
server: '{{ vaultwarden.borg }}'
There is only the nginx vhost file, fairly straightforward:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
server {
listen 80;
listen [::]:80;
server_name pass.adyxax.org;
location / {
return 308 https://$server_name$request_uri;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name pass.adyxax.org;
location / {
proxy_pass http://127.0.0.1:8083;
}
ssl_certificate adyxax.org.fullchain;
ssl_certificate_key adyxax.org.key;
}
The first step is to deploy this new configuration to the server:
make run limit=lore.adyxax.org tags=vaultwarden
After that I manually backup the vaultwarden data with:
ssh root@dalinar.adyxax.org systemctl stop podman-vaultwarden
ssh root@dalinar.adyxax.org "/run/current-system/sw/bin/pg_dump -b -c -C -h localhost -U vaultwarden -d vaultwarden > /tmp/vaultwarden.sql"
ssh root@dalinar.adyxax.org tar czf /tmp/vaultwarden.tar.gz /srv/vaultwarden/
I retrieve then migrate these backups with:
scp root@dalinar.adyxax.org:/tmp/vaultwarden.{sql,tar.gz} .
ssh root@dalinar.adyxax.org rm /tmp/vaultwarden.{sql,tar.gz}
scp vaultwarden.{sql,tar.gz} root@lore.adyxax.org:
rm vaultwarden.{sql,tar.gz}
On the new server, restoring the backup is done with:
ssh root@lore.adyxax.org systemctl stop podman-vaultwarden
ssh root@lore.adyxax.org "cat vaultwarden.sql | su - postgres -c 'psql'"
ssh root@lore.adyxax.org tar -xzf vaultwarden.tar.gz -C /srv/vaultwarden/
ssh root@lore.adyxax.org rm vaultwarden.{sql,tar.gz}
ssh root@lore.adyxax.org systemctl start podman-vaultwarden
I then test the new server by setting the record in my /etc/hosts file. Since it all works well, I roll back my change to /etc/hosts and update the DNS record using OpenTofu.
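For reference, the temporary test entry in /etc/hosts is a single line like this one (placeholder IP, not lore's real address):
203.0.113.10 pass.adyxax.org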
I did all this in early October and performed several vaultwarden upgrades since then. It all works well!
31.12.2024 00:00Migrating vaultwarden from nixos to DebianI listened to the Graphics Audio adaptation of Dawnshard. Just like for the previous audio books, I must say it was a great experience that I highly recommend. The level of realization is just as good, and they kept the same actors! They changed the narrator again though, but though I still prefer the one from the first stormlight books I also like this one.:w
5.12.2024 00:00 Dawnshard

I did the advent of code 2023 in haskell; it was a fun experience as always! Why write about this now? Because I just finished the last puzzle as a warm up for the upcoming year’s puzzles!
I did the first 11 puzzles on time last December, but the “one puzzle a day” schedule is a bit much when life happens around you. I therefore took a break and did a few more puzzles in mid January. Upon reaching the 17th puzzle (the shortest paths with weird constraints puzzle) I took another break until June, where I pushed through until day 24 (the hailstorm that forces you to do math). I took another break, only to pick it up again this week. I just finished days 24 and 25, completing the set!
This article explains some patterns I used for solving the puzzles. I always use megaparsec to parse the input, even when it is overkill… just because I find it so fun to work with.
Relying on megaparsec paid off from day 2, where you need to parse this beauty:
Game 1: 3 blue, 4 red; 1 red, 2 green, 6 blue; 2 green
Game 2: 1 blue, 2 green; 3 green, 4 blue, 1 red; 1 green, 1 blue
Game 3: 8 green, 6 blue, 20 red; 5 blue, 4 red, 13 green; 5 green, 1 red
Game 4: 1 green, 3 red, 6 blue; 3 green, 6 red; 3 green, 15 blue, 14 red
Game 5: 6 red, 1 blue, 3 green; 2 blue, 1 red, 2 green
You got an ID, then some draws separated by ;
. A draw is a set of colors given out of order, which I see as a clear cut case of running permutations:
data Draw = Draw Int Int Int deriving (Eq, Show)
data Game = Game Int [Draw] deriving Show
type Input = [Game]
type Parser = Parsec Void String
parseColor :: String -> Parser Int
parseColor color = read <$> try (some digitChar <* hspace <* string color <* optional (string ", "))
parseDraw :: Parser Draw
parseDraw = do
(blue, green, red) <- runPermutation $
(,,) <$> toPermutationWithDefault 0 (parseColor "blue")
<*> toPermutationWithDefault 0 (parseColor "green")
<*> toPermutationWithDefault 0 (parseColor "red")
void . optional $ string "; "
return $ Draw blue green red
parseGame :: Parser Game
parseGame = do
id <- read <$> (string "Game " *> some digitChar <* optional (string ": "))
Game id <$> someTill parseDraw eol
parseInput' :: Parser Input
parseInput' = some parseGame <* eof
I also got better at understanding functors and applicatives, using them to simplify mapping things to types. For example on day 12 you got a map that looks like:
???.### 1,1,3
.??..??...?##. 1,1,3
?#?#?#?#?#?#?#? 1,3,1,6
????.#...#... 4,1,1
????.######..#####. 1,6,5
?###???????? 3,2,1
Here is how I parsed it:
data Tile = Broken | Operational | Unknown deriving Eq
instance Show Tile where
show Broken = "#"
show Operational = "."
show Unknown = "?"
data Row = Row [Tile] [Int] deriving Show
type Input = [Row]
type Parser = Parsec Void String
parseNumber :: Parser Int
parseNumber = read <$> some digitChar <* optional (char ',')
parseTile :: Parser Tile
parseTile = char '#' $> Broken
<|> char '.' $> Operational
<|> char '?' $> Unknown
parseRow :: Parser Row
parseRow = Row <$> some parseTile <* space
<*> some parseNumber <* eol
parseInput' :: Parser Input
parseInput' = some parseRow <* eof
The functor usage is very useful for parts where you want to parse one thing but return another thing like:
char '#' $> Broken
I also used it to parse the integers from the digit characters without any intermediate step, which I find really clean and powerful:
parseNumber = read <$> some digitChar <* optional (char ',')
The applicative style (an extension of the functor concept) allows this clever structure:
parseRow :: Parser Row
parseRow = Row <$> some parseTile <* space
<*> some parseNumber <* eol
Parsing also did all the heavy lifting on day 7, where you need to rank poker-like hands. Your input is a list of hands of five cards and a bid:
32T3K 765
T55J5 684
KK677 28
KTJJT 220
QQQJA 483
Here is the data structure I settled on:
data Card = Two | Three | Four | Five | Six | Seven | Eight | Nine | T | J | Q | K | A deriving (Eq, Ord)
data Rank = HighCard
| Pair
| Pairs
| Brelan
| FullHouse
| Quartet
| Quintet
deriving (Eq, Ord, Show)
data Hand = Hand Rank [Card] Int deriving (Eq, Show)
compareCards :: [Card] -> [Card] -> Ordering
compareCards (x:xs) (y:ys) | x == y = compareCards xs ys
| otherwise = x `compare` y
instance Ord Hand where
(Hand a x _) `compare` (Hand b y _) | a == b = compareCards x y
| otherwise = a `compare` b
type Input = [Hand]
The hard part of the puzzle is to rank hands, which I decided to compute while parsing because why not!
parseCard :: Parser Card
parseCard = char '2' $> Two
<|> char '3' $> Three
<|> char '4' $> Four
<|> char '5' $> Five
<|> char '6' $> Six
<|> char '7' $> Seven
<|> char '8' $> Eight
<|> char '9' $> Nine
<|> char 'T' $> T
<|> char 'J' $> J
<|> char 'Q' $> Q
<|> char 'K' $> K
<|> char 'A' $> A
evalRank :: [Card] -> Rank
evalRank n@(a:b:c:d:e:_) | not (a<=b && b<=c && c<=d && d<=e) = evalRank $ L.sort n
| a==b && b==c && c==d && d==e = Quintet
| (a==b && b==c && c==d) || (b==c && c==d && d==e) = Quartet
| a==b && (b==c || c==d) && d==e = FullHouse
| (a==b && b==c) || (b==c && c==d) || (c==d && d==e) = Brelan
| (a==b && (c==d || d==e)) || (b==c && d==e) = Pairs
| a==b || b==c || c==d || d==e = Pair
| otherwise = HighCard
parseHand :: Parser Hand
parseHand = do
cards <- some parseCard <* char ' '
bid <- read <$> (some digitChar <* eol)
return $ Hand (evalRank cards) cards bid
parseInput' :: Parser Input
parseInput' = some parseHand <* eof
With all the heavy lifting already done, computing the solution for part1 of the puzzle is simply:
compute :: Input -> Int
compute = sum . zipWith (*) [1..] . map (\(Hand _ _ bid) -> bid) . L.sort
This was particularly interesting for part 2, where there is a twist: J cards are now jokers, so you need to handle this as a wildcard when ranking hands! After racking my brain for a while, I decided to make the type system bear the complexity by adjusting the data structure to this:
data Card = J | Two | Three | Four | Five | Six | Seven | Eight | Nine | T | Q | K | A deriving Show
instance Eq Card where
J == _ = True
_ == J = True
a == b = show a == show b
instance Ord Card where
a `compare` b = show a `compare` show b
a <= b = show a <= show b
With this change, I could now rank the hands with:
evalRank :: [Card] -> Rank
evalRank [J, J, J, J, _] = Quintet
evalRank [J, J, J, d, e] | d==e = Quintet
| otherwise = Quartet
evalRank [J, J, c, d, e] | c==d && d==e = Quintet
| c==d || d==e = Quartet
| otherwise = Brelan
evalRank [J, b, c, d, e] | b==c && c==d && d==e = Quintet
| (b==c || d==e) && c==d = Quartet
| b==c && d==e = FullHouse
| b==c || c==d || d==e = Brelan
| otherwise = Pair
evalRank [a, b, c, d, e] | a==b && a==c && a==d && a==e = Quintet
| (a==b && a==c && a==d) || (b==c && b==d && b==e) = Quartet
| a==b && (b==c || c==d) && d==e = FullHouse
| (a==b && b==c) || (b==c && c==d) || (c==d && d==e) = Brelan
| (a==b && (c==d || d==e)) || (b==c && d==e) = Pairs
| a==b || b==c || c==d || d==e = Pair
| otherwise = HighCard
I love haskell, I wish I could use it daily and not just for seasonal puzzles.
22.11.2024 00:00 Advent of code 2023 in haskell

I am migrating several services from a NixOS server (myth.adyxax.org) to a Debian server (lore.adyxax.org). Here is how I performed the operation for my self hosted privatebin served from paste.adyxax.org.
The meta/main.yaml
contains the role dependencies:
---
dependencies:
- role: 'borg'
- role: 'nginx'
- role: 'podman'
The tasks/main.yaml
file only creates a data directory and drops a configuration file. All the heavy lifting is then done by calling other roles:
---
- name: 'Make privatebin data directory'
file:
path: '/srv/privatebin'
owner: '65534'
group: '65534'
mode: '0750'
state: 'directory'
- name: 'Deploy privatebin configuration file'
copy:
src: 'privatebin.conf.php'
dest: '/etc/'
owner: 'root'
mode: '0444'
notify: 'restart privatebin'
- include_role:
name: 'podman'
tasks_from: 'container'
vars:
container:
cmd: ['--config-path', '/srv/cfg/conf.php']
name: 'privatebin'
env_vars:
- name: 'PHP_TZ'
value: 'Europe/Paris'
- name: 'TZ'
value: 'Europe/Paris'
image: '{{ versions.privatebin.image }}:{{ versions.privatebin.tag }}'
publishs:
- container_port: '8080'
host_port: '8082'
ip: '127.0.0.1'
volumes:
- dest: '/srv/cfg/conf.php:ro'
src: '/etc/privatebin.conf.php'
- dest: '/srv/data'
src: '/srv/privatebin'
- include_role:
name: 'nginx'
tasks_from: 'vhost'
vars:
vhost:
name: 'privatebin'
path: 'roles/paste.adyxax.org/files/nginx-vhost.conf'
- include_role:
name: 'borg'
tasks_from: 'client'
vars:
client:
jobs:
- name: 'data'
paths:
- '/srv/privatebin'
name: 'privatebin'
server: '{{ paste_adyxax_org.borg }}'
There is a single handler:
---
- name: 'restart privatebin'
service:
name: 'podman-privatebin'
state: 'restarted'
First there is my privatebin configuration, fairly simple:
;###############################################################################
;# \_o< WARNING : This file is being managed by ansible! >o_/ #
;# ~~~~ ~~~~ #
;###############################################################################
[main]
discussion = true
opendiscussion = false
password = true
fileupload = true
burnafterreadingselected = false
defaultformatter = "plaintext"
sizelimit = 10000000
template = "bootstrap"
notice = "Note: This is a personal sharing service: Data may be deleted anytime. Don't share illegal, unethical or morally reprehensible content."
languageselection = true
zerobincompatibility = false
[expire]
default = "1week"
[expire_options]
5min = 300
10min = 600
1hour = 3600
1day = 86400
1week = 604800
1month = 2592000
1year = 31536000
[formatter_options]
plaintext = "Plain Text"
syntaxhighlighting = "Source Code"
markdown = "Markdown"
[traffic]
limit = 10
header = "X_FORWARDED_FOR"
dir = PATH "data"
[purge]
limit = 300
batchsize = 10
dir = PATH "data"
[model]
class = Filesystem
[model_options]
dir = PATH "data"
Then the nginx vhost file, fairly straightforward too:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
server {
listen 80;
listen [::]:80;
server_name paste.adyxax.org;
location / {
return 308 https://$server_name$request_uri;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name paste.adyxax.org;
location / {
proxy_pass http://127.0.0.1:8082;
}
ssl_certificate adyxax.org.fullchain;
ssl_certificate_key adyxax.org.key;
}
The first step is to deploy this new configuration to the server:
make run limit=lore.adyxax.org tags=paste.adyxax.org
After that I log in and manually migrate the privatebin data folder. On the old server I make a backup with:
systemctl stop podman-privatebin
tar czf /tmp/privatebin.tar.gz /srv/privatebin/
I retrieve this backup on my laptop and send it to the new server with:
scp root@myth.adyxax.org:/tmp/privatebin.tar.gz .
scp privatebin.tar.gz root@lore.adyxax.org:
On the new server, I restore the backup with:
systemctl stop podman-privatebin
tar -xzf privatebin.tar.gz -C /srv/privatebin/
chown -R 65534:65534 /srv/privatebin
chmod -R u=rwX /srv/privatebin
systemctl start podman-privatebin
I then test the new server by setting the record in my /etc/hosts file. Since it all works well, I roll back my change to /etc/hosts and update the DNS record using OpenTofu. I then clean up by running this on my laptop:
rm privatebin.tar.gz
ssh root@myth.adyxax.org 'rm /tmp/privatebin.tar.gz'
ssh root@lore.adyxax.org 'rm privatebin.tar.gz'
I did all this in early October, my backlog of blog articles is only growing!
17.11.2024 00:00 Migrating privatebin from NixOS to Debian

It was nice to properly read a book after so many audiobooks. This story was short, fun and refreshing: I recommend reading it.
17.11.2024 00:00 The Frugal Wizard’s Handbook For Surviving Medieval England

Before succumbing to nixos, I was running all my containers on k3s. This time I am migrating things to podman and trying to achieve a lighter setup. This article presents the ansible role I wrote to manage podman containers.
The main tasks file sets up podman and the required network configurations with:
---
- name: 'Run OS specific tasks for the podman role'
include_tasks: '{{ ansible_distribution }}.yaml'
- name: 'Make podman scripts directory'
file:
path: '/etc/podman'
mode: '0700'
owner: 'root'
state: 'directory'
- name: 'Deploy podman configuration files'
copy:
src: 'cni-podman0'
dest: '/etc/network/interfaces.d/'
owner: 'root'
mode: '444'
My OS specific task file Debian.yaml
looks like this:
---
- name: 'Install podman dependencies'
ansible.builtin.apt:
name:
- 'buildah'
- 'podman'
- 'rootlesskit'
- 'slirp4netns'
- name: 'Deploy podman configuration files'
copy:
src: 'podman-bridge.json'
dest: '/etc/cni/net.d/87-podman-bridge.conflist'
owner: 'root'
mode: '444'
The entrypoint for this role is the container.yaml tasks file:
---
# Inputs:
# container:
# cmd: optional(list(string))
# env_vars: list(env_var)
# image: string
# name: string
# publishs: list(publish)
# volumes: list(volume)
# With:
# env_var:
# name: string
# value: string
# publish:
# container_port: string
# host_port: string
# ip: string
# volume:
# dest: string
# src: string
- name: 'Deploy podman systemd service for {{ container.name }}'
template:
src: 'container.service'
dest: '/etc/systemd/system/podman-{{ container.name }}.service'
owner: 'root'
mode: '0444'
notify: 'systemctl daemon-reload'
- name: 'Deploy podman scripts for {{ container.name }}'
template:
src: 'container-{{ item }}.sh'
dest: '/etc/podman/{{ container.name }}-{{ item }}.sh'
owner: 'root'
mode: '0500'
register: 'deploy_podman_scripts'
loop:
- 'start'
- 'stop'
- name: 'Restart podman container {{ container.name }}'
shell:
cmd: "systemctl restart podman-{{ container.name }}"
when: 'deploy_podman_scripts.changed'
- name: 'Start podman container {{ container.name }} and activate it on boot'
service:
name: 'podman-{{ container.name }}'
enabled: true
state: 'started'
There is a single main.yaml
handler:
---
- name: 'systemctl daemon-reload'
shell:
cmd: 'systemctl daemon-reload'
Here is the cni-podman0 interfaces file I deploy on Debian hosts. It is required for the bridge to be up on boot so that other services can bind ports on it. Without this, the bridge would only come up when the first container starts, which is too late in the boot process.
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
auto cni-podman0
iface cni-podman0 inet static
address 10.88.0.1/16
pre-up brctl addbr cni-podman0
post-down brctl delbr cni-podman0
Here is the JSON cni bridge configuration file I use, customized to add IPv6 support:
{
"cniVersion": "0.4.0",
"name": "podman",
"plugins": [
{
"type": "bridge",
"bridge": "cni-podman0",
"isGateway": true,
"ipMasq": true,
"hairpinMode": true,
"ipam": {
"type": "host-local",
"routes": [
{
"dst": "0.0.0.0/0"
}, {
"dst": "::/0"
}
],
"ranges": [
[{
"subnet": "10.88.0.0/16",
"gateway": "10.88.0.1"
}], [{
"subnet": "fd42::/48",
"gateway": "fd42::1"
}]
]
}
}, {
"type": "portmap",
"capabilities": {
"portMappings": true
}
}, {
"type": "firewall"
}, {
"type": "tuning"
}
]
}
Here is the jinja templated start bash script:
#!/usr/bin/env bash
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
set -euo pipefail
podman rm -f {{ container.name }} || true
rm -f /run/podman-{{ container.name }}.ctr-id
exec podman run \
--rm \
--name={{ container.name }} \
--log-driver=journald \
--cidfile=/run/podman-{{ container.name }}.ctr-id \
--cgroups=no-conmon \
--sdnotify=conmon \
-d \
{% for env_var in container.env_vars | default([]) %}
-e {{ env_var.name }}={{ env_var.value }} \
{% endfor %}
{% for publish in container.publishs | default([]) %}
-p {{ publish.ip }}:{{ publish.host_port }}:{{ publish.container_port }} \
{% endfor %}
{% for volume in container.volumes | default([]) %}
-v {{ volume.src }}:{{ volume.dest }} \
{% endfor %}
{{ container.image }} {% for cmd in container.cmd | default([]) %}{{ cmd }} {% endfor %}
Here is the jinja templated stop bash script:
#!/usr/bin/env bash
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
set -euo pipefail
if [[ ! "$SERVICE_RESULT" = success ]]; then
podman stop --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
fi
podman rm -f --ignore --cidfile=/run/podman-{{ container.name }}.ctr-id
Here is the jinja templated systemd unit service:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
[Unit]
After=network-online.target
Description=Podman container {{ container.name }}
[Service]
ExecStart=/etc/podman/{{ container.name }}-start.sh
ExecStop=/etc/podman/{{ container.name }}-stop.sh
NotifyAccess=all
Restart=always
TimeoutStartSec=0
TimeoutStopSec=120
Type=notify
[Install]
WantedBy=multi-user.target
I do not call the role from a playbook; I prefer running the setup from an application’s role that relies on podman, using a meta/main.yaml containing something like:
---
dependencies:
- role: 'borg'
- role: 'nginx'
- role: 'podman'
Then from a tasks file:
- include_role:
name: 'podman'
tasks_from: 'container'
vars:
container:
cmd: ['--config-path', '/srv/cfg/conf.php']
name: 'privatebin'
env_vars:
- name: 'PHP_TZ'
value: 'Europe/Paris'
- name: 'TZ'
value: 'Europe/Paris'
image: 'docker.io/privatebin/nginx-fpm-alpine:1.7.4'
publishs:
- container_port: '8080'
host_port: '8082'
ip: '127.0.0.1'
volumes:
- dest: '/srv/cfg/conf.php:ro'
src: '/etc/privatebin.conf.php'
- dest: '/srv/data'
src: '/srv/privatebin'
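Once deployed, a container managed this way can be inspected like any other systemd service (using the privatebin example above):
systemctl status podman-privatebin
journalctl -u podman-privatebin
podman ps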
I enjoy this design; it works really well. I am missing a task for deprovisioning a container, but I have not needed it yet.
8.11.2024 00:00 Podman ansible role

Before succumbing to nixos, I had been using an ansible role to manage my nginx web servers. Now that I am in need of it again, I refined it a bit: here is the result.
The role has OS specific vars in files named after the operating system. For example in vars/Debian.yaml
I have:
---
nginx:
etc_dir: '/etc/nginx'
pid_file: '/run/nginx.pid'
www_user: 'www-data'
While in vars/FreeBSD.yaml
I have:
---
nginx:
etc_dir: '/usr/local/etc/nginx'
pid_file: '/var/run/nginx.pid'
www_user: 'www'
The main tasks file sets up nginx and the global configuration common to all virtual hosts:
---
- include_vars: '{{ ansible_distribution }}.yaml'
- name: 'Install nginx'
package:
name:
- 'nginx'
- name: 'Make nginx vhost directory'
file:
path: '{{ nginx.etc_dir }}/vhost.d'
mode: '0755'
owner: 'root'
state: 'directory'
- name: 'Deploy nginx configuration files'
copy:
src: '{{ item }}'
dest: '{{ nginx.etc_dir }}/{{ item }}'
notify: 'reload nginx'
loop:
- 'headers_base.conf'
- 'headers_secure.conf'
- 'headers_static.conf'
- 'headers_unsafe_inline_csp.conf'
- name: 'Deploy nginx configuration template'
template:
src: 'nginx.conf'
dest: '{{ nginx.etc_dir }}/'
notify: 'reload nginx'
- name: 'Deploy nginx certificates'
copy:
src: '{{ item }}'
dest: '{{ nginx.etc_dir }}/'
notify: 'reload nginx'
loop:
- 'adyxax.org.fullchain'
- 'adyxax.org.key'
- 'dh4096.pem'
- name: 'Start nginx and activate it on boot'
service:
name: 'nginx'
enabled: true
state: 'started'
I have a vhost.yaml task file which currently simply deploys a file and reloads nginx:
- name: 'Deploy {{ vhost.name }} vhost {{ vhost.path }}'
template:
src: '{{ vhost.path }}'
dest: '{{ nginx.etc_dir }}/vhost.d/{{ vhost.name }}.conf'
notify: 'reload nginx'
There is a single main.yaml
handler:
---
- name: 'reload nginx'
service:
name: 'nginx'
state: 'reloaded'
I deploy four configuration files in this role. These are all variants of the same theme and their purpose is just to prevent duplicating statements in the virtual hosts configuration files.
headers_base.conf
:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
add_header X-Frame-Options deny;
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options nosniff;
add_header Referrer-Policy strict-origin;
add_header Cache-Control no-transform;
add_header Permissions-Policy "accelerometer=(), camera=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), payment=(), usb=()";
# 6 months HSTS pinning
add_header Strict-Transport-Security max-age=16000000;
headers_secure.conf
:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
include headers_base.conf;
add_header Content-Security-Policy "script-src 'self'";
headers_static.conf
:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
include headers_secure.conf;
# Infinite caching
add_header Cache-Control "public, max-age=31536000, immutable";
headers_unsafe_inline_csp.conf
:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
include headers_base.conf;
add_header Content-Security-Policy "script-src 'self' 'unsafe-inline'";
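A virtual host then just includes whichever variant fits. As an illustration, a hypothetical static site vhost could look like:
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.adyxax.org;
    root /srv/example;
    include headers_static.conf;
    ssl_certificate adyxax.org.fullchain;
    ssl_certificate_key adyxax.org.key;
}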
I have a single template for nginx.conf
:
###############################################################################
# \_o< WARNING : This file is being managed by ansible! >o_/ #
# ~~~~ ~~~~ #
###############################################################################
user {{ nginx.www_user }};
worker_processes auto;
pid {{ nginx.pid_file }};
error_log /var/log/nginx/error.log;
events {
worker_connections 1024;
}
http {
include mime.types;
types_hash_max_size 4096;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers on;
gzip on;
gzip_static on;
gzip_vary on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private auth;
gzip_types application/atom+xml application/geo+json application/javascript application/json application/ld+json application/manifest+json application/rdf+xml application/vnd.ms-fontobject application/wasm application/x-rss+xml application/x-web-app-manifest+json application/xhtml+xml application/xliff+xml application/xml font/collection font/otf font/ttf image/bmp image/svg+xml image/vnd.microsoft.icon text/cache-manifest text/calendar text/css text/csv text/javascript text/markdown text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/xml;
proxy_redirect off;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_http_version 1.1;
proxy_set_header "Connection" "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
client_max_body_size 40M;
server_tokens off;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param REMOTE_USER $remote_user;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
uwsgi_param QUERY_STRING $query_string;
uwsgi_param REQUEST_METHOD $request_method;
uwsgi_param CONTENT_TYPE $content_type;
uwsgi_param CONTENT_LENGTH $content_length;
uwsgi_param REQUEST_URI $request_uri;
uwsgi_param PATH_INFO $document_uri;
uwsgi_param DOCUMENT_ROOT $document_root;
uwsgi_param SERVER_PROTOCOL $server_protocol;
uwsgi_param REQUEST_SCHEME $scheme;
uwsgi_param HTTPS $https if_not_empty;
uwsgi_param REMOTE_ADDR $remote_addr;
uwsgi_param REMOTE_PORT $remote_port;
uwsgi_param SERVER_PORT $server_port;
uwsgi_param SERVER_NAME $server_name;
ssl_dhparam dh4096.pem;
ssl_session_cache shared:SSL:2m;
ssl_session_timeout 1h;
ssl_session_tickets off;
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
access_log off;
server_name_in_redirect off;
return 444;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name _;
access_log off;
server_name_in_redirect off;
return 444;
ssl_certificate adyxax.org.fullchain;
ssl_certificate_key adyxax.org.key;
}
include vhost.d/*.conf;
}
I do not call the role from a playbook; I prefer running the setup from an application’s role that relies on nginx, using a meta/main.yaml containing something like:
---
dependencies:
- role: 'borg'
- role: 'nginx'
- role: 'postgresql'
Then from a tasks file:
- include_role:
name: 'nginx'
tasks_from: 'vhost'
vars:
vhost:
name: 'www'
path: 'roles/www.adyxax.org/files/nginx-vhost.conf'
I did not find an elegant way to pass a file path local to one role to another. Because of that, here I just specify the full vhost file path, complete with the roles/ prefix.
If you have an elegant idea for passing the local file path from one role to another, do not hesitate to ping me!
28.10.2024 00:00 Nginx ansible role

I listened to the Graphics Audio adaptation of Oathbringer. Just like for the previous audio books, I must say it was a great experience that I highly recommend. The level of realization is just as good, and they kept the same actors! And to my delight, it was again the same narrator as the first stormlight archive books!
22.10.2024 00:00 Oathbringer

I wrote a shell script to gather ec2 instance metadata with an ansible fact.
I am using POSIX /bin/sh
because I wanted to support a variety of operating systems. Besides that, the only dependency is curl
:
#!/bin/sh
set -eu
metadata() {
local METHOD=$1
local URI_PATH=$2
local TOKEN="${3:-}"
local HEADER
if [ -z "${TOKEN}" ]; then
HEADER='X-aws-ec2-metadata-token-ttl-seconds: 21600' # request a 6 hours token
else
HEADER="X-aws-ec2-metadata-token: ${TOKEN}"
fi
curl -sSfL --request "${METHOD}" \
"http://169.254.169.254/latest${URI_PATH}" \
--header "${HEADER}"
}
METADATA_TOKEN=$(metadata PUT /api/token)
KEYS=$(metadata GET /meta-data/tags/instance "${METADATA_TOKEN}")
PREFIX='{'
for KEY in $KEYS; do
VALUE=$(metadata GET "/meta-data/tags/instance/${KEY}" "${METADATA_TOKEN}")
printf '%s"%s":"%s"' "${PREFIX}" "${KEY}" "${VALUE}"
PREFIX=','
done
printf '}'
Depending on curl can be avoided. If you are willing to use netcat instead and be declared a madman by your colleagues, you can rewrite the function with:
metadata() {
local METHOD=$1
local URI_PATH=$2
local TOKEN="${3:-}"
local HEADER
if [ -z "${TOKEN}" ]; then
HEADER='X-aws-ec2-metadata-token-ttl-seconds: 21600' # request a 6 hours token
else
HEADER="X-aws-ec2-metadata-token: ${TOKEN}"
fi
printf "${METHOD} /latest${URI_PATH} HTTP/1.0\r\n%s\r\n\r\n" \
"${HEADER}" \
| nc -w 5 169.254.169.254 80 | tail -n 1
}
I deploy the script this way:
- name: 'Deploy ec2 metadata fact gathering script'
copy:
src: 'ec2_metadata.sh'
dest: '/etc/ansible/facts.d/ec2_metadata.fact'
owner: 'root'
mode: '0500'
register: 'ec2_metadata_fact'
- name: 'reload facts'
setup: 'filter=ansible_local'
when: 'ec2_metadata_fact.changed'
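Once gathered, the instance tags are then available under ansible_local; for example with a hypothetical env instance tag:
- name: 'Show the env instance tag'
  debug:
    msg: '{{ ansible_local.ec2_metadata.env }}'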
It works, is simple and I like it. I am happy!
12.10.2024 00:00 Shell script for gathering imdsv2 instance metadata on AWS ec2