Blog.thibmaek.com

@thibmaek's thoughts

Ramblings about improving my house with automation, AI, tech and life in general


Blog.thibmaek.com hosts 1 user (powered by Ghost)

Server location (104.20.23.98): US United States

Server location (172.66.175.239): US United States

Reviews

Not yet rated (0%)

Be the first one
and write a review
about blog.thibmaek.com.

Blog.thibmaek.com News

AI-powered photography: Taking my Ricoh GRIIIx to Ireland

https://blog.thibmaek.com/ai-pow...


I'm going on a short road trip to Ireland soon, which is the perfect opportunity to take my Ricoh GR III back out of the drawer. It's been sitting unused since I lent it to a friend for his trip, and the weather in Belgium has been quite dull, limiting my photo-walks in the city.

My friend adjusted some settings, and rather than trying to remember my previous configurations, I decided to just factory reset the camera and start from some of Reggie's best practices for it instead.

Manually this would require a lot of work: checking recipes on ricohrecipes.com and watching/pausing Reggie's video to apply the settings. So, I wondered, can't I use AI to help with this?

Summarizing Reggie's tips

I've recently been really into Superwhisper, an AI voice assistant that can work in different modes (basically system prompts and model selections) to customize its text output. I opened Reggie's YouTube video, hit play, and activated Superwhisper in Note mode, which provided me with a clear recap of his video.

Reggie's Settings

Focus Settings

  • Focus Mode: Auto Area or AF Center (default for subjects in center).
  • Touch AF: Used manually if auto area fails.
  • Face/Eye Detection: On (useful for portraits).
  • AF Assist Light: Off (to avoid distracting subjects).
  • Snap Distance: Usually 1.5m or 2m (adjusted in the field; not a fixed default).
  • Full Press Snap: On (though rarely used as it hides focus range).
  • Focus Peaking: Off.
  • AF Continuous (AFC): Priority set to 2fps (note: AFC is rarely used by the speaker).
  • Manual Focus/Auto Magnification: Off.

Exposure & Metering

  • Exposure Mode: Aperture Priority (used for U1, U2, and U3 modes).
  • AE Metering: Multi-segment (default); occasionally Highlight-weighted.
  • ISO Settings:
    • Mode: Auto.
    • Upper Limit: 6400.
    • Lower Limit: 100.
  • Minimum Shutter Speeds (User Modes):
    • U1: 1/125s (Portraits/Action).
    • U2: 1/60s (Still life/Low light).
    • U3: 1/15s (Intentional low light blur).

Image & Color Settings

  • White Balance: Multi-Auto White Balance (MAWB) with a shift of 0:A6 (adds warmth).
  • WB Fine Tuning: Off.
  • File Format: RAW + JPEG.
  • Aspect Ratio: 3:2.
  • Color Space: sRGB (preferred for web consistency).
  • Image Control (Recipes):
    • Slot 1: Reggie’s Color Negative.
    • Slot 2: Reggie’s Monotone Standard.
  • Peripheral Illumination Correction: On (corrects vignetting).
  • D-Range Correction: Highlight Correction (Auto), Shadow Correction (Medium).
  • Noise Reduction: Off.
  • Shake Reduction (IBIS): On.
  • Horizon Correction: Off.
  • HDF (Highlight Diffusion Filter): On (assigned to movie button).

Customization & Controls

  • User Modes (U1-U3): Primarily used to save different minimum shutter speeds.
  • Mode Dial (ADJ):
    1. Snap Focus
    2. Focus Setting
    3. Image Control (Recipes)
    4. AE Metering
    5. Outdoor View Setting
  • Function Buttons:
    • Fn Button: AE Lock (Set to "Keep AE Lock" to toggle on/off).
    • Left D-Pad: Crop shortcut.
    • Right D-Pad: Drive Mode/Multi-exposure.
    • Movie Button: HDF toggle.
  • Shutter Button: Set to AF + AE Lock.

Display & System

  • Electronic Level: Level and Tilt (Design Type 1).
  • Grid Guide: Center point line (not 3x3).
  • Instant Review: 0.5 seconds.
  • Auto Image Rotation: On.
  • Power Lamp: Off.
  • Shutter Sound: Off.
  • Auto Power Off: 1 Minute.
  • LCD Settings: Calibrated to +4/+4.

I could save that recap as an Obsidian note, apply all the desired settings in one go, and tweak settings here and there to my personal preferences.

Getting the right recipes

In the past, I've used a combination of recipes from the Ricoh Recipes app, Reddit, and recommendations from friends. As I accumulated them, I stored the recipes in various places. Now was the time to create a one-to-rule-them-all note with all the recipes so I could easily scroll through them or, even better, feed them into an AI assistant.

Since an LLM cannot easily extract data from an iOS app like Ricoh Recipes, I instead took the links from ricohrecipes.com and provided them to a single Raycast chat, asking it to extract only the settings for each recipe.
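As a rough illustration of that step, here's a minimal Python sketch that bundles a list of recipe links into a single extraction prompt. The URLs and prompt wording here are hypothetical placeholders, not the exact ones I used:

```python
# Hypothetical sketch: bundle recipe links into one extraction prompt for an LLM.
RECIPE_URLS = [
    "https://www.ricohrecipes.com/recipes/example-one",  # placeholder links
    "https://www.ricohrecipes.com/recipes/example-two",
]

def build_extraction_prompt(urls):
    """Ask the LLM to return only the camera settings for each recipe page."""
    links = "\n".join(f"- {u}" for u in urls)
    return (
        "For each Ricoh recipe page below, extract only the camera settings "
        "(image control, white balance, exposure compensation) as a bullet "
        "list per recipe. Skip commentary and sample photos.\n\n" + links
    )

print(build_extraction_prompt(RECIPE_URLS))
```

The resulting prompt goes into the chat as-is, with the links left for the assistant to fetch.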


That provided me with a complete list, which I then pasted into a Raycast Note (I haven't set up an MCP for Obsidian yet). I asked it to review the recipes from a photographer's perspective, checking the weather for the locations and identifying the top three recipes to use for my trip.


The result

In about an hour, I was ready with the Ricoh camera for my trip. Normally, I would have taken an afternoon to watch videos, research websites, and apply and test the settings.

I'm curious to see how it turns out using these settings for the first time in the wild, and I will update with some sample pictures here after the trip. 📸

15.2.2026 16:32 · AI-powered photography: Taking my Ricoh GRIIIx to Ireland
https://blog.thibmaek.com/ai-pow...

Using TagStudio to organise my 3D printing files

https://blog.thibmaek.com/using-...

One of my biggest hobbies is 3D printing. Over the years I've gotten better at it, bought a better printer, and dipped my toes in the waters of Fusion 360 to make my own designs.

But managing, storing & categorizing all the typical 3D print artifacts (STLs, OpenSCAD files, 3MFs, PDFs) you need before the finished plastic part rolls off the printer was always a tedious job. Recently, however, I landed on what I'd say is the solution I'm most satisfied with compared to my other attempts.

How I previously approached it

I started out not storing files at all; what was the use of keeping the digital file in an archive once I'd printed it? I was wrong 😅. It turns out that if you make a true hobby out of it, you discover & print so many files that at a certain point you need the exact same bracket you printed 4 months ago and land on:

what was the Thingiverse page for this again?

So I started organizing my files on my Mac: creating folder structures (STLs/Electronics, STLs/Hooks, CAD/Clips) and tagging them using Finder. Those tags proved to be very error-prone in that version of macOS. The structure was a bit confusing as well, resulting in some duplicates and in me not being able to remember where I dropped a file.

So I moved to Manyfold, which promised to be the self-hosted solution to asset management for 3D printing. It worked pretty well for a while: I got 3D previews, tags to manage my files, and search. But as my collection grew, the app became sluggish; I had to dedicate an obscene amount of RAM to the container, and it frequently froze or crashed when viewing the STL renders in the browser.

My current solution

I wasn't 100% happy with Manyfold as my final solution. One night, while watching some YouTube videos about Obsidian, I stumbled across a guy on the internet who made Obsidian videos but was also creating an unrelated application called TagStudio.

Though he demoed it as asset management for memes, it had all the features I needed to organise my STLs.

It doesn't have STL preview (yet?) but I'm willing to turn a blind eye if I get better findability & overview. I can always view the STL in Finder anyway.

So I jumped once again, downloaded TagStudio, added a new folder for 3D print files, and started tagging them. I primarily download from Printables, and here's how I still do it today:

  • I always make sure to store both the PDF & STLs/3MFs. PDFs give me instructions on the print and other important info.
  • I then add a Printed tag to all files I've printed before.
  • And I create categories like Manufacturable and Skadis that I can then chain other tags to.

Queries that I often use, like listing all PDFs, finding all untagged entries in the folder, or finding my favorite Gridfinity prints, are all summarized in a Raycast note so I have them nearby at any time.

18.1.2026 17:27 · Using TagStudio to organise my 3D printing files
https://blog.thibmaek.com/using-...

Moving efficiently through the kitchen

https://blog.thibmaek.com/moving...


Over the 2025 holiday period, I've cooked a variety of dishes, including canapés, starters, mains, desserts, and soups for myself and guests coming over. After Christmas dinner, I always use the turkey bones to make stock, which I can store in the freezer for use throughout the year. While making this stock, I paused to reflect on how efficiently I can move through my kitchen during such preparations and wondered: what actually makes it possible for me to do this?

Magnetic knife strip


It all starts here; this was the best upgrade I made to the kitchen. No more clunky knife blocks, it's very slim on storage space, and it's immediately within reach of my prep area whenever I want a different knife. I keep the knives I use most (the two leftmost) at the front so they're easiest to grab.

PS: Someone told me strips like this are supposedly bad for the knives, since the blade dulls every time you put it on the strip. I haven't experienced that myself.

PPS: You'll still spot a knife block in the back; I reserve that for knives I rarely use (like a deboning one) and heavier items (my honing steel).

Seasoning station and a spoon rest


Close to the stove I keep a jar of fine seasoning salt, the pepper mill, and a mislabeled jar of coarse sea salt for finishing. I add salt and pepper in layers as I cook, and not having to resort to an annoying shaker or open an extra drawer allows for seasoning in a swift pinch.

I also use a spoon rest to keep spoons, knives, and chopsticks out of the way instead of lying there dirty on the kitchen countertop. My wife made it out of ceramics for me 😊

PS: Behind the stove I also keep an IKEA flower pot with some other elevating spices & seasonings that go with any dish (e.g. Tabasco, Worcestershire, Tajin). Those are typically a bit clunky to get neatly into a drawer.

My Gridfinity compatible prep box


As a very basic entry into Fusion 360 and CAD modelling for 3D printing, I designed this kitchen organiser box. It's compatible with a Gridfinity 6x5 baseplate so that I can add modules to my liking.

I leveled up significantly after introducing this. It allows me to reorganise on the fly, take out separate boxes to do something in another corner of the kitchen, have all my tasting spoons at hand, or hang my wedding ring on a small hook when rolling meatballs. Best of all, since it's portable, I can easily take my kitchen workspace with me on retreats (where I typically like to cook for people).

PS: This could also be used for example for a cocktail workstation!

PPS: Since this was the prototype, it actually misses some features from the design I've uploaded to Printables. I've since made it customizable with parameters and added notches in the sides for easier grabbing & carrying.

PPPS: As a fun home automation thing I've stuck an NFC sticker between the right corner of the box's edge and the bin with the spoons. If I tap my phone against it, it starts playing some music and lights up the kitchen counter.

Prep bowls


These small bowls are key to my kitchen. They're inexpensive, lightweight, fit easily in the dishwasher, and stack for easy storage. Super versatile things: I use them for making sauces & dressings, getting food out of the pan and set aside for later use, ice baths, or just storing leftovers or prepped items in the fridge.

PS: I also use these sheet-pan-like oval shallow plates for the same purpose. They fit the dishwasher even better.

Spice drawer with common spices


This is where the next-level organising nerd in me pops up. I have a single drawer dedicated to frequently used spices, all in the same IKEA jars and with the same Dille & Kamille labels. The drawer is right next to the stove so I don't have to reach too far if I feel like adding something in the heat of the moment.


I then keep the spice pantry & lesser-used spices in a cabinet on the other side of the kitchen. This doesn't eat up precious countertop space, but they're there if I want to refill a spice container in the drawer. I use simple masking tape & plastic deli containers to remember what they are.

A crate with stacked deli containers


There was a time when I saved more deli containers than I needed, but these are just so useful. Chinese takeout, supermarket soup, and those easy Tupperware-like containers from IKEA & Hema: I save all of them to store leftovers in the fridge or freezer, or to put away prepped items for a later dinner party.

The biggest trick here is to use a medium-sized foldable crate (this one is from Hema) so you can easily grab all of the containers and take out the right ones. That way it doesn't get messy in the kitchen drawer or cabinet.

A sink caddy


I don't have the largest space near the sink in my kitchen, but having all cleaning items in a caddy keeps it tidy and maximizes the space for items that are drying. I'm a big fan of the Scrub Mommy sponge, which has two sides to thoroughly clean all the dirty dishes.

A simple steel wool brush helps me clean the kitchen sink if it has some residual oils or other dirt in it. I also designed a simple caddy for the two hemp sponges I use to wipe dirt off the countertop (pictured on the right).

31.12.2025 13:48 · Moving efficiently through the kitchen
https://blog.thibmaek.com/moving...

tl;dr — Week 50, 2025

https://blog.thibmaek.com/tl-dr-...

Here's what I've liked reading this week, dive in!

🎸 Using Raycast AI to auto create song wishlists

I picked up writing an article again! Though it's a rather short read, it really helped solve an annoying problem for me as a music enthusiast with the help of Raycast & iOS.

Saving discovered music using AI
As a music enthusiast, I’m always discovering tracks across platforms. But managing a unified ‘wantlist’ for my Plex library, future purchases, or DJ sets has always been a challenge. Spotify, code exporters, screenshots in camera roll. There had to be a better way…

💡 IKEA rolled out their new Matter smart home lights & sensors.

I personally can't wait to get my hands on them in Belgium soon. https://www.ikea.com/global/en/newsroom/retail/the-new-smart-home-from-ikea-matter-compatible-251106/

🟠 Mistral releases new frontier models & alternative to Claude code

They released a new Mistral 3 model and also made the edge model Ministral 3 publicly available. I've installed it via Ollama and will try chatting with it in Open WebUI to see how it performs for smaller tasks & chats.

Vibe has been fun to experiment with, being able to use their cloud hosted version of Devstral 2 for free or link it up to a local model.

Introducing: Devstral 2 and Mistral Vibe CLI. | Mistral AI
State-of-the-art, open-source agentic coding models and CLI agent.
Introducing Mistral 3 | Mistral AI
A family of frontier open-source multimodal models

🇨🇳 Things keep moving in China's AI space

DeepSeek released new versions of their open weight model. ByteDance tries to roll out wired-to-your-life AI in China.

ByteDance and DeepSeek Are Placing Very Different AI Bets
The diverging path of China’s two leading AI players shows where the country’s artificial intelligence industry is headed.

🧭 Reads on engineering culture & principles

I rediscovered some bookmarked reads, plus new ones pointed out by coworkers, on building an engineering culture, which principles bring clarity & direction to a team, and how to innovate & evolve sustainably.

Choose Boring Technology
bliki: Frequency Reduces Difficulty
“If it hurts - do it more often”. Good advice if the amount of pain raises exponentially with the time between actions, such as for integrating software.
Development Philosophy
This document sets guidelines for how we approach software development at Sentry.

🤝 Balancing trust & competence in the workplace

My good friend first, coworker second, Bart Schroyen sent me this article on how trust & competence tie into the workplace and building high-performing teams.

The Slipstream Model of Competence
Why a High-Trust Environment Is More Important Than Working With Smart People

11.12.2025 11:11 · tl;dr — Week 50, 2025
https://blog.thibmaek.com/tl-dr-...

Saving discovered music using AI

https://blog.thibmaek.com/saving...

The problem I encountered


As a music enthusiast, I enjoy exploring a wide range of genres and artists. I collect music for various reasons: records I plan to buy later, tracks I want to add to my Plex library (I’m a big Plexamp user!), or songs I can already imagine including in a DJ set. Since I prefer to own the digital files and store them in my Plex collection, I need a "wantlist" to refer to occasionally and archive into my library.

The challenge I’ve faced repeatedly is the lack of a simple way to combine all the different sources where I discover music into one list. My usual sources include YouTube, Shazam, Spotify, Instagram comments or posts, and random webpages.

In the past, I relied on Spotify, saving everything to a large "wantlist" playlist. However, this approach had limitations: some music wasn’t available on Spotify, or the versions differed. Switching between apps also made the process cumbersome.

I once attempted to write a Go exporter to aggregate lists from various services (YouTube playlists, Spotify playlists, my Shazam collection). But like many side projects, the effort quickly became overwhelming, and I abandoned it.

My best attempt so far at automating it was to take screenshots and store them in a specific Apple Notes folder. This worked well in some ways: I had everything in one folder, accessible from all my devices, and could screenshot any service on my phone. But there was a big downside: my camera roll filled up with screenshots, making it harder to find other photos when I needed them. It resulted in huge clutter when scrolling to find a certain photo memory.

Automating it with AI & Shortcuts, and the Action Button

Getting too annoyed recently, I decided to spend some time on making it better. I wanted to use AI to identify the music and store the results in a plain-text Apple Note. Since I lovingly use Raycast AI on my phone, I created a Shortcut tied to my Action button to discover and save music:

  1. It takes a screenshot of my entire screen.
  2. It opens the screenshot for markup, so I can draw and guide the AI on where to look.
  3. It sends the screenshot, along with a structured prompt, to an LLM via Raycast AI. (Using Mistral 3 as my default LLM)
  4. The AI provides the music details, and I receive an alert as part of the Shortcut. If the result is incorrect, I can stop the Shortcut so the note doesn't get cluttered with wrong results.
  5. If I continue, it asks for the purpose of the music and stores it in different notes based on that purpose.

The process now works smoothly! Uploading the image takes a little time, but I'm usually not in a hurry when doing this. If I do want quicker results, I adapted a Shortcut I found online to first run OCR on the screenshot and send the extracted text as a plain-text prompt to the LLM.
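For readers curious what the OCR-first variant boils down to, here's a minimal Python sketch of the same flow: build a structured prompt from the OCR'd screenshot text and parse the LLM's JSON reply. Everything here (function names, prompt wording, the canned reply) is illustrative, not the actual Shortcut:

```python
import json

def build_prompt(ocr_text):
    """Structured prompt asking the LLM to identify a track from screenshot text."""
    return (
        "Identify the song in the following screenshot text. "
        "Reply with JSON containing only 'artist' and 'title'.\n\n"
        f"Screenshot text:\n{ocr_text}"
    )

def parse_reply(reply):
    """Parse the LLM's JSON reply into an (artist, title) pair."""
    data = json.loads(reply)
    return data["artist"], data["title"]

# Example with a canned reply instead of a real LLM call:
prompt = build_prompt("Maggie May\nRod Stewart")
artist, title = parse_reply('{"artist": "Rod Stewart", "title": "Maggie May"}')
```

In the real Shortcut, the reply comes back from Raycast's Ask AI action, and the parsed pair gets appended to the Apple Note for the chosen purpose.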

The prompt I used for the LLM. The example input part is just some example text from a Spotify screenshot to guide the LLM.
Using Raycast's Ask AI Shortcut with the prompt (stored in a variable) and giving it the markup result as an image attachment. I'm just using the default model (which I set to Mistral).
And finally, storing it in separate notes per purpose.

End result

Here's a short screen recording of the Shortcut in action on a killer Rod Stewart song. It doesn't show the song being added to my note, but trust me it works 😉


Any questions? Like a copy of the Shortcut? Just send me a message and I'll help 👋

7.12.2025 17:16 · Saving discovered music using AI
https://blog.thibmaek.com/saving...

tl;dr — Week 47, 2025

https://blog.thibmaek.com/tl-dr-...


Rushing this one a bit since I didn't really find the time to properly follow up, and the list of interesting things I read sat idle for a while.

Still wanted to post about the following nice reads:

2x Performance, $300k Savings: A Case Study in Rewriting a Critical Service in Rust
Wu Xiaoyun’s portfolio page

Short & sweet insight on using two high-performing languages side by side at TikTok.


Cloud models · Ollama Blog
Cloud models are now in preview, letting you run larger models with fast, datacenter-grade hardware. You can keep using your local tools while running larger models that wouldn’t fit on a personal computer.

I saw this one pop up and am hoping for a use case where I can try it out. Using the same CLI and API as local models installed via Ollama, you can now also offload the compute to their (supposedly) secure & private cloud platform.


👏 This is an important moment for the React ecosystem. React and React Native are moving from Meta to a new React Foundation, a vendor-neutral organization that will steward React’s future and support the broader community. Expo is honored to be a founding member along with Meta, Microsoft, Amazon, Vercel, Callstack, and Software Mansion. The React Foundation will: ♢ Maintain React infrastructure like GitHub, CI, and trademarks ♢ Organize React Conf ♢ Support ecosystem projects through grants and community programs We’re proud to help shape React’s next chapter as it becomes even more open, collaborative, and community-driven. https://lnkd.in/dZuVgu3B

What has always been implicit has now been made explicit: React Native is owned by the consortium of companies that have been driving it for the last few years. Meta is taking a more formal/reviewing role, and most notably Callstack, Expo, and Software Mansion are driving development forward from now on.


Upgreat AI - Building Sustainable Compute Solutions
European AI infrastructure with sustainability, sovereignty, and developer experience at the core. Powered by renewable energy in Belgium.

I was super impressed to see Upgrade Estate dip their toes in the cloud & AI waters here in Belgium. They're quite a cool & innovative company, and seeing them think about building a competitor to the cloud hyperscalers and AI moguls, with a heightened focus on sustainability, is something I'm personally a very big fan of. I'm curious to see how it develops further.


epfl-llm/meditron-7b · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.

Learned about this by listening to the AI podcast from EPFL / Marcel Salathé. Though it's already a bit older, I still found it very cool to see an LLM specifically trained for the medical space.

16.11.2025 15:36 · tl;dr — Week 47, 2025
https://blog.thibmaek.com/tl-dr-...

tl;dr — Week 36, 2025

https://blog.thibmaek.com/tl-dr-...

Weather in your calendar

Weather in Calendar ⛅️ 20°
You can now (again) get the weather forecast directly in your calendar. This local weather calendar uses emoji icons ⛅️ 🌧️ 🌦 🌨️ to display a 14-day forecast from OpenWeatherMap. Enter your city, adjust according to your preferences, and make a free calendar. Works for all calendars supporting online .ics and emojis, like Google Calendar, Apple Calendar, and Outlook on iOS, Android, macOS, and Windows.

A nice service that I stumbled across: subscribe to an online calendar that displays the weather forecast right in your calendar application of choice. Useful when scheduling sun-dependent events.
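Under the hood, such a feed is plain iCalendar: each forecast day is just a VEVENT whose SUMMARY carries the emoji and temperature. A minimal hand-written example (all values made up):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//weather-feed//EN
BEGIN:VEVENT
UID:20250908-weather@example.invalid
DTSTART;VALUE=DATE:20250908
SUMMARY:⛅️ 20°
END:VEVENT
END:VCALENDAR
```

Point your calendar app at the hosted .ics URL as a subscription and the events refresh on the app's own schedule.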

Exploiting AI via the browser

OpenAI, Perplexity, Dia, etc. are all focusing on transforming the browser into the next big gateway to AI for everyday users. This builds directly on last week's tl;dr on AI security, with Perplexity Comet proving the point: be wary of agentic AI.

Bringing sense to AI usage

A Better Way to Think About AI
AI can be used to automate tasks—and entire jobs. But it could also be designed to collaborate with humans. David Autor and James Manyika on why we should focus on the latter:

This article was shared all over the internet in the last few weeks. How much cognitive power do we offload, and are we willing to lose any?

Correlating pizza to crisis

Pentagon Pizza Index: Could Late-Night Orders Predict Global Crises?
Could a surge in pizza orders near the Pentagon in Washington signal major global news? Those who believe in the Pentagon Pizza Index think so.

Every time a major event or crisis occurred near the Pentagon, neighboring pizza places saw their orders increase. Over time, journalists became aware of this and started observing the pizza shops to assess the seriousness of ongoing events. A fun read, with its own history-tracking website.

Hype sensitivity of the tech sector


This article is in Dutch

A highly resonating article. I often find myself wondering whether the tech sector considers itself too much larger than life. Are we losing the focus on delivering value over hype & self-credit?

9.9.2025 18:56 · tl;dr — Week 36, 2025
https://blog.thibmaek.com/tl-dr-...

tl;dr — Week 33, 2025

https://blog.thibmaek.com/tl-dr-...

What is tl;dr, an introduction



Welcome to my latest experiment: a bi-weekly newsletter called Thib loved do read, aka tl;dr.

Anyone who really knows me knows I'm a lists guy. I make them for literally everything. And I'm also someone who can't help but share cool stuff I stumble across - a brilliant article, a new release of a piece of software I got excited about, a cool video I watched and learned a thing or two from, or just something that made me think "nice, that's clever."

So I've been thinking for a while now: why not combine these two things and share my lists?

I'll be hosting my own curated-list newsletter that collects nice articles, product launches, release notes, random columns — basically all the stuff that I really liked reading.

💡
What topics can you expect in tl;dr?
Well, since I'm currently really into reading about how AI blends with businesses and how it all works under the hood, there's going to be a lot of that content.

Where I want to take this

I aim to make it a bit of a personal experiment as well, where I will try to use an AI assistant to help me write the actual newsletter.

For a while I've been toying with the idea of making an LLM aware of my tone of voice and writing style by feeding it my earlier written content, so that afterwards I can simply feed it my Karakeep list, my notes of the week, or maybe even the Safari tabs on my phone and see if it can draft the newsletter for me.

Ideally I can just make micro-edits and post my ✨ editorial notes ✨ & thoughts. I'm not quite sure if I'll like it, but let's see; the pivot back to writing these myself is easy enough to make 😺

I won't be using AI right off the bat and will write them myself first. If I change that it will be indicated clearly in the email.

Here's what I liked reading


Simple AI mind tricks with big consequences

Prompt injection: A visual, non technical primer for ChatGPT users
What is going on here? Objectively, this seems really bad. ChatGPT seems to be unable to distinguish between what the user says and what documents or websites say the user said.

It lays out the basics of prompt injection but, more importantly, demonstrates once again that no company, big or small, is safe from what is essentially a very basic AI mind-trick.

Home Assistant keeps making home automation AI friendly

2025.8: The summer of AI ☀️
AI Tasks have arrived! Enjoy streaming Text-to-Speech for faster voice responses, control individual group members directly from dialogs, weekday support in time triggers, improved area dashboards…

I like the steps HA is taking to make AI feel more embedded in the platform, especially by focusing on local AI solutions like Ollama in their examples. With the introduction of AI Tasks, I am eager to get my hands on it and start thinking about and creating possible tasks such as weather reports, notifications, and schedule briefings using my locally hosted models.

A new edge model from Google

Introducing Gemma 3 270M: The compact model for hyper-efficient AI- Google Developers Blog
Explore Gemma 3 270M, a compact, energy-efficient AI model for task-specific fine-tuning, offering strong instruction-following and production-ready quantization.

Google introduced a new lightweight/edge model variant of Gemma 3. Don't expect it to be very good as an assistant or to answer questions correctly, but I'm looking forward to trying it out in Home Assistant's AI Tasks to see how it does, or finetuning it on my own content 🤔. Given that it's a 270-million-parameter model, it should run quickly enough locally on recent Mac hardware.

Mistral Document AI lands on Azure

Deepening our Partnership with Mistral AI on Azure AI Foundry | Microsoft Community Hub
We’re excited to mark a new chapter in our collaboration with Mistral AI, a leading European AI innovator, with the launch of Mistral Document AI in Azure AI…

I have long been a fan of Mistral. Releasing their Document AI offering on Azure will surely open doors for companies that may be hesitant to send data to Mistral's own cloud (La Plateforme) but are looking to leverage AI-enhanced document workflows.

AI and energy

Wendover Productions' videos are always a great source of learning new things for me. In this video, he explains how AI consumes energy from the grid and what that means for households. It's a very American problem (for now), but it provides a real understanding of what it means for our global energy infrastructure.

A very brief & clear warning on agentic AI

I've written a short bit on this exact video on LinkedIn before, but when paired with the first article in this newsletter, it becomes highly relevant again. I felt it was important to include it again here.

How digital time works

I knew what NTP was, but not how it worked. This short 8 min video breaks down the different layers of time syncing (Stratum levels) and explains how all of the digital clocks on our laptops, phones, or connected watches try to stay in sync with the correct time.

25.8.2025 07:24 — tl;dr — Week 33, 2025
https://blog.thibmaek.com/tl-dr-...

Adding Mistral OCR to LiteLLM

https://blog.thibmaek.com/adding...

Prelude


I've recently been playing around with AI a lot: getting a solid grasp of the providers on the market, the performance of self-hosting, and concepts like MCP, RAG, and prompt engineering.

Another thing that's recently been on my radar is consciously choosing European technology. With everything going on in the world right now, and strong innovation coming out of the EU tech scene, I believe this is the right bet to make.

So, let's combine those two and look into how you can integrate Mistral's new OCR feature (from a European AI provider) with LiteLLM (a self-hosted model gateway).

💡
In the end, as an experiment, I also asked Mistral Le Chat to vibe-write a draft of this blog post. Not perfect, but I only had to make a few minor corrections to keep my own touch to it.

Introduction

Combining OCR with LLMs, such as passing Mistral OCR results to Mistral Small, opens up interesting use cases.

For instance, you can extract text from receipts or long white paper PDFs and then analyze or generate insights using that LLM. This combination is particularly useful for automating data extraction, enhancing document processing workflows, and enabling advanced text analysis from visual content. I'll take you through some of the details on how to set up Mistral's relatively new OCR feature in LiteLLM.

Having LiteLLM is a real blessing because you get a single gateway managing all your models (no clicking around on different websites or API endpoints) plus usage & billing insights per platform consuming your gateway (e.g. Open WebUI, Bruno, your OpenAI-compatible app...).

Adding the Mistral OCR Passthrough Route

To integrate Mistral OCR with LiteLLM, the first step is to configure a passthrough route in LiteLLM. This route lets LiteLLM communicate with the Mistral OCR service by relaying the request directly: LiteLLM doesn't transform or dictate any data, it really just acts as a proxy, passing the request 1:1 through to Mistral's endpoint.

This configuration cannot be done through the LiteLLM UI; you will need to modify the YAML configuration file directly. That's a bit of a bummer, and I don't understand why it isn't possible, but hey, it might come in a later release.

In LiteLLM's config.yaml, add the following pass_through_endpoints configuration under general_settings:

general_settings:
  pass_through_endpoints:
    - path: "/mistral/v1/ocr"
      target: "https://api.mistral.ai/v1/ocr"
      headers:
        Authorization: "bearer os.environ/MISTRAL_API_KEY"
        content-type: application/json
        accept: application/json
      forward_headers: True

We don't want to hardcode our Mistral API key, so we pass it as an environment variable. Make sure that however you're running LiteLLM, you've set the env var. I typically run everything in Docker Compose and provide it as a litellm.env file via my Compose config's env_file directive:

DATABASE_URL="postgresql://llmproxy:*****@litellm_db:5432/litellm"
STORE_MODEL_IN_DB=True
MASTER_KEY="*******"

POSTGRES_DB=litellm
POSTGRES_USER=llmproxy
POSTGRES_PASSWORD=******

# Ollama
OLLAMA_API_BASE=http://******:11434
OLLAMA_API_KEY=""

+MISTRAL_API_KEY=********
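For reference, the relevant part of such a Compose service could look roughly like this (service name, image tag, and port are illustrative, not copied from my actual setup):

```yaml
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest  # illustrative tag
    env_file:
      - litellm.env   # the file shown above, including MISTRAL_API_KEY
    ports:
      - "4000:4000"
```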

Calling the Mistral OCR API

Once you have configured the passthrough route and restarted LiteLLM, you can start calling the Mistral OCR API through LiteLLM. Below are examples of how to make API calls.

Example 1: Image OCR Request

To send an image for OCR processing, you can use the following curl command:

$ curl --request POST \
  --url http://litellmhost.local:4000/mistral/v1/ocr \
  --header 'authorization: Bearer LITELLM_API_KEY' \
  --header 'content-type: application/json' \
  --data '{
  "model": "mistral-ocr-latest",
  "document": {
    "image_url": "https://raw.githubusercontent.com/mistralai/cookbook/refs/heads/main/mistral/ocr/receipt.png"
  }
}'

Example response:

{
  "pages": [
    {
      "index": 0,
      "markdown": "# PLACE FACE UP ON DASH <br> CITY OF PALO ALTO <br> NOT VALID FOR ONSTREET PARKING \n\nExpiration Date/Time 11:59 PM\n\nAUG 19, 2024\n\nPurchase Date/Time: 01:34pm Aug 19, 2024\nTotal Due: $\\$ 15.00$\nRate: Daily Parking\nTotal Paid: $\\$ 15.00$\nPmt Type: CC (Swipe)\nTicket \\#: 00005883\nS/N \\#: 520117260957\nSetting: Permit Machines\nMach Name: Civic Center\n\\#*****-1224, Visa\nDISPLAY FACE UP ON DASH\n\nPERMIT EXPIRES\nAT MIDNIGHT",
      "images": [],
      "dimensions": {
        "dpi": 200,
        "height": 3210,
        "width": 1806
      }
    }
  ],
  "model": "mistral-ocr-2503-completion",
  "usage_info": {
    "pages_processed": 1,
    "doc_size_bytes": 3110191
  }
}
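The per-page markdown field is usually what you want downstream. A minimal sketch, assuming the response shape shown above, that pulls the recognized text out of the JSON:

```python
import json

def extract_markdown(ocr_response: dict) -> str:
    """Join the per-page markdown of a Mistral OCR response into one string."""
    return "\n\n".join(page["markdown"] for page in ocr_response.get("pages", []))

# Trimmed-down version of the example response above:
response = json.loads('{"pages": [{"index": 0, "markdown": "# PLACE FACE UP ON DASH"}]}')
print(extract_markdown(response))  # → # PLACE FACE UP ON DASH
```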

Example 2: OCR Request for PDFs

OCR'ing a PDF is also straightforward but uses a different request body format. I've included a screenshot of Bruno showing the request format & response when OCR'ing my public resume:

Adding Mistral OCR to LiteLLM
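The gist of it: per Mistral's OCR API, a PDF is referenced via a document_url entry instead of image_url. A sketch of that request body (the URL below is a placeholder, not my actual resume):

```python
import json

# Hedged sketch of the PDF variant of the OCR request body.
pdf_request = {
    "model": "mistral-ocr-latest",
    "document": {
        "type": "document_url",
        "document_url": "https://example.com/my-resume.pdf",  # placeholder
    },
}

# This is the body you'd POST to the same /mistral/v1/ocr passthrough route.
print(json.dumps(pdf_request, indent=2))
```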

Current limitations

While integrating Mistral OCR with LiteLLM offers several benefits (unified interface, scalability, single source of truth), there are still some areas that need improvement, in my opinion, to make this setup truly compelling. The most important one is cost monitoring.

⚠️
Since Mistral OCR is integrated as a passthrough API in LiteLLM, it is not possible to monitor costs or set budgets as you can with regular models in LiteLLM. This limitation means you need to manage cost tracking separately, which can add complexity to your operations.

Conclusion

Integrating Mistral OCR with LiteLLM offers a streamlined approach to managing OCR tasks within a unified AI model gateway.

While the current passthrough API setup has limitations, particularly in cost monitoring, the benefits of a centralized interface and enhanced functionality make it a valuable addition. Next steps would definitely be to look at how you can integrate Mistral OCR with a Mistral model like Pixtral or Small to do the actual processing.

I'm thinking of integrating those Mistral LLM models via LiteLLM next.
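For instance, since LiteLLM exposes an OpenAI-compatible chat endpoint, the OCR'ed markdown could be fed straight into a chat model. A rough sketch of the glue (the model alias and prompt are assumptions, not taken from an actual setup):

```python
import json

def build_chat_payload(ocr_markdown: str, model: str = "mistral-small") -> dict:
    """Build an OpenAI-style chat completion payload that asks a Mistral model
    to process OCR output. POST this (JSON-encoded) to LiteLLM's
    /chat/completions endpoint, authenticated with your LiteLLM API key."""
    return {
        "model": model,  # assumed model alias configured in LiteLLM
        "messages": [
            {"role": "system", "content": "Extract the key fields from this OCR'ed document."},
            {"role": "user", "content": ocr_markdown},
        ],
    }

payload = build_chat_payload("# PLACE FACE UP ON DASH\n\nTotal Due: $15.00")
print(json.dumps(payload, indent=2))
```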

24.4.2025 17:17 — Adding Mistral OCR to LiteLLM
https://blog.thibmaek.com/adding...

Labelling Zigbee battery powered devices

https://blog.thibmaek.com/labell...

💡
A short post about the importance of physically labelling battery-powered Zigbee devices (more correctly: End Devices).
Labelling Zigbee battery powered devices

I've invested quite a lot in Zigbee hardware over the years, resulting in all my lights being smart lights with accompanying Zigbee remote controllers on the wall. Since I prefer a consistent look and feel, I have quite a few copies of the same model, like IKEA's Styrbar remote.

Labelling Zigbee battery powered devices
List of IKEA Styrbar devices in Zigbee2MQTT

Once their battery dies, instead of instantly replacing it, I tend to resort to automations (like automatic presence lighting when entering a room) or to switching the light on with the Home Assistant app, leaving the depleted device on the wall for a longer time.

It's only when a handful of them are depleted that I take them all upstairs to my desk, plug the rechargeable batteries into the charger, and swap in new coin cell batteries. But this comes with a downside. Here's how it looks after all the batteries are refreshed. Have fun guessing which remote goes in which room:

Labelling Zigbee battery powered devices

In theory I could go to each room and press every remote until I find the right one, to the annoyance of my wife seeing lights magically turn on or off in random rooms she might be in. Or I could change my habit of saving them up for a big battery reload/replace and instead do it ad hoc when a battery dies.

But changing habits is hard, so I resorted to a much simpler route: a Dymo label maker. Smack a label on it and never have to guess the device again! Two words of caution here, though:

  1. Use something unique that is bound to the device. Don't just print the room the device is in on the label: once you repurpose the device for another room, or for a use beyond a single room, the label no longer corresponds and you'll have to swap it again.
  2. I thought the network address would make a nice short label (e.g. 0x2B84), but it resets if you ever have to pair the device again. I opted for the IEEE address instead, which is the Zigbee equivalent of a MAC address: unique to the device and unchanged after a re-pair.
Labelling Zigbee battery powered devices
Four neatly labeled devices. Sorry for the inconsistent label placement

Now the only parameter I have to adjust over time is the device's friendly name in Zigbee2MQTT / Home Assistant. If I enter the last four characters of the label in Zigbee2MQTT, I can immediately see where the device should go after a battery swap:
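Renaming doesn't even need the UI: Zigbee2MQTT's bridge API accepts a rename request over MQTT on the topic zigbee2mqtt/bridge/request/device/rename. A small sketch of building that payload (the IEEE address and friendly name below are made up):

```python
import json

def rename_payload(ieee_address: str, friendly_name: str) -> str:
    """JSON payload for Zigbee2MQTT's device rename request.

    Publish the result to 'zigbee2mqtt/bridge/request/device/rename'."""
    return json.dumps({"from": ieee_address, "to": friendly_name})

# Hypothetical device whose label would end in 23cd:
print(rename_payload("0x00158d0001ab23cd", "living_room/styrbar_sofa"))
```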

Labelling Zigbee battery powered devices
🌟
Conclusion: I thought this was a really nice example of how something as dumb as a sticker with text on it can have a real impact in a smart-home-driven house, avoiding annoyance, maintenance, and habit changes.

10.9.2024 15:34 — Labelling Zigbee battery powered devices
https://blog.thibmaek.com/labell...