Ramblings about improving my house with automation, AI, tech and life in general
I'm going on a short road trip to Ireland soon, which is the perfect opportunity to take my Ricoh GR III back out of the drawer. It's been sitting unused since I lent it to a friend for his trip, and the weather in Belgium has been quite dull, limiting my photo-walks in the city.
My friend adjusted some settings, and rather than trying to remember my previous configurations, I decided to just factory reset the camera and then start using some of Reggie's best practices for the camera instead:
Manually this would require a lot of work: checking recipes on ricohrecipes.com and watching/pausing Reggie's video to apply the settings. So, I wondered, can't I use AI to help with this?
I've recently been really into Superwhisper, an AI voice assistant that can work in different modes (basically system prompts and model selections) to customize its text output. I opened Reggie's YouTube video, hit play, and activated Superwhisper in Note mode, which provided me with a clear recap of his video.
Focus Settings
Exposure & Metering
Image & Color Settings
Customization & Controls
Display & System
I could save that recap as an Obsidian note, apply all the desired settings in one go, and tweak settings here and there to my personal preferences.
In the past, I've used a combination of recipes from the Ricoh Recipes app, Reddit, and recommendations from friends. As I accumulated them, I stored the recipes in various places. Now was the time to create a one-to-rule-them-all note with all the recipes so I could easily scroll through them or, even better, feed them into an AI assistant.
Since an LLM cannot easily extract data from an iOS app like the Ricoh Recipes one, I instead took the links from ricohrecipes.com and provided them to a single Raycast AI chat, asking it to extract only the settings for each recipe.
That provided me with a complete list, which I then pasted into a Raycast Note (I haven't set up an MCP for Obsidian yet). I asked it to review the recipes from a photographer's perspective, checking the weather for the locations and identifying the top three recipes to use for my trip.
In about an hour, I was ready with the Ricoh camera for my trip. Normally, I would have taken an afternoon to watch videos, research websites, and apply and test the settings.
I'm curious to see how it turns out using these settings for the first time in the wild, and I will update with some sample pictures here after the trip. 📸
15.2.2026 16:32 · AI-powered photography: Taking my Ricoh GRIIIx to Ireland

One of my biggest hobbies is 3D printing. Over the years I've gotten better at it, bought a better printer and dipped my toes in the waters of Fusion360 to make my own designs.
But managing, storing & categorizing all the typical 3D print artifacts (STLs, OpenSCAD files, 3MFs, PDFs) you need before the finished plastic part rolls off the printer was always a tedious job. I've recently, however, landed on what I'd say is the solution I'm most satisfied with so far, compared to earlier attempts.
I started out not storing files at all: once something was printed, what was the use of keeping the digital file in an archive? I was wrong 😅. If you make a true hobby out of it, you discover & print so many files that at a certain point you need the exact same bracket you printed 4 months ago and land on:
what was the Thingiverse page for this again?
So I started organizing my files on my Mac: creating folder structures (STLs/Electronics, STLs/Hooks, CAD/Clips) and tagging them using Finder. Those tags proved very error-prone in that version of macOS. The structure was a bit confusing as well, resulting in duplicates and me not remembering where I had dropped a file.
So I moved to Manyfold, which promised to be the selfhosted solution to asset management for 3D printing. It worked pretty well for a while: I got 3D previews, tags to manage my files, and search. But as my collection grew, the app became sluggish, I had to dedicate an obscene amount of RAM to the container, and it frequently froze or crashed when viewing the STL renders in the browser.
I wasn't 100% happy with Manyfold as my final solution. One night, while watching some YouTube videos about Obsidian, I stumbled across a creator who made Obsidian videos but was also building an unrelated application called TagStudio.
Though he demoed it as asset management for memes, it had all the features I needed to organise my STLs:
It doesn't have STL preview (yet?) but I'm willing to turn a blind eye if I get better findability & overview. I can always view the STL in Finder anyway.
So I jumped once again and downloaded TagStudio, added a new folder for my 3D print files and started tagging them. I primarily download from Printables, and here's how I still do it today:
Queries that I often use, like listing all PDFs, finding all untagged entries in the folder or finding my favorite Gridfinity prints, are all summarized in a Raycast note so I have them nearby at any time:
Over the 2025 holiday period, I've cooked a variety of dishes, including canapés, starters, mains, desserts, and soups for myself and guests coming over. After Christmas dinner, I always use the turkey bones to make stock, which I can store in the freezer for use throughout the year. While making this stock, I paused to reflect on how efficiently I can move in my kitchen during such preparations and wondered: what actually makes it possible for me to do this?
It all starts here; this was the best upgrade I made to the kitchen. No more clunky knife blocks, very slim on storage space and immediately within reach of my prep area whenever I want to use a different knife. I keep the knives I use most (the two on the left) at the front so they're easiest to grab.
PS: Someone told me blocks like this are supposedly bad for the knives, since it dulls the blade every time you put it on the strip. Haven't experienced that myself.
PPS: You'll still spot a knife block in the back, I reserve that for knives I rarely use (like a deboning one) and heavier items (my honing steel)
Close to the stove I keep a jar of fine seasoning salt, the pepper mill and a mislabeled jar of coarse sea salt for finishing. I add salt and pepper in layers as I cook dishes and not having to resort to an annoying shaker or open an extra drawer allows for seasoning in a swift pinch.
I also use a spoon rest to keep spoons, knives and chopsticks out of the way without them lying dirty on the kitchen countertop. My wife made it out of ceramics for me 😊
PS: Behind the stove I also keep an IKEA flower pot with some other elevating spices & seasonings that go with any dish (e.g. Tabasco, Worcestershire, Tajin). Those are typically a bit clunky to store neatly in a drawer.
As a very basic entry into Fusion 360 and CAD modelling for 3D printing I've designed this kitchen organiser box. It's compatible with a Gridfinity 6x5 baseplate so that I can add modules to my liking.
I leveled up significantly after introducing this. It allows me to reorganise on the fly, take separate boxes out to do something in another corner of the kitchen, have all my tasting spoons at hand, or hang my wedding ring on a small hook when rolling meatballs. Best of all, since it's portable I can easily take my kitchen workspace with me on retreats (where I typically like to cook for people).
PS: This could also be used for example for a cocktail workstation!
PPS: Since this was the prototype, it's actually missing some features from the design I've uploaded to Printables. I've since made it customizable with parameters and added notches in the sides for easier grabbing & carrying.
PPPS: As a fun home automation thing I've stuck an NFC sticker between the right corner of the box's edge and the bin with the spoons. If I tap my phone against it, it starts playing some music and lights up the kitchen counter.
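In Home Assistant terms, an NFC tap like that maps onto a tag trigger. Here's a minimal sketch of what such an automation could look like (the tag id and entity ids are assumptions for illustration, not my actual config):

```yaml
# Hypothetical Home Assistant automation: all ids below are placeholders.
automation:
  - alias: "Kitchen organiser NFC tapped"
    trigger:
      - platform: tag
        tag_id: kitchen_organiser_tag  # assigned when you first scan the tag
    action:
      - service: media_player.media_play
        target:
          entity_id: media_player.kitchen_speaker  # assumed entity
      - service: light.turn_on
        target:
          entity_id: light.kitchen_counter  # assumed entity
```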
These small bowls are key to my kitchen. They're inexpensive, lightweight, fit easily in the dishwasher and stack for easy storage. Super versatile things: I use them for making sauces & dressings, setting food aside after it comes out of the pan, ice baths, or just storing leftovers or prepped items in the fridge.
PS: I also use these sheet-pan like oval shallow plates for the same purpose. They fit the dishwasher even better.
This is where the next-level organising nerd in me pops up. I have a sole drawer dedicated to frequently used spices, all in the same IKEA jars and with the same Dille & Kamille labels. The drawer is right next to the stove so I don't have to reach too far if I feel like adding something in the heat of the moment.
I then keep the spice pantry & lesser-used spices in a cabinet on the other side of the kitchen. Doesn't eat away precious countertop space but they're there if I want to refill a spice container in the drawer. I use simple masking tape & plastic deli containers to remember what they are.
There was a time when I saved far more deli containers than I needed, but these are just so useful. Chinese takeout, supermarket soup and those easy Tupperware-like containers from IKEA & Hema: I save all of them to store leftovers in the fridge or freezer, or to put away some prepped items for a later dinner party.
Biggest trick here is to use a medium sized foldable crate (this one is from Hema) so you can easily grab all of the containers and take out the right ones. Then it doesn't get all messy in the kitchen drawer or cabinet.
I don't have the largest space near the sink here in the kitchen but having all cleaning items in a caddy keeps it tidy and maximizes the space for items that are drying. I'm a big fan of using the Scrub Mommy sponge that has two sides to thoroughly clean all the dirty dishes.
A simple steel wool brush helps me clean the kitchen sink if it has some residual oils or other dirt in it. I also designed a simple caddy for the two hemp sponges I use to wipe dirt off the countertop (pictured on the right).
31.12.2025 13:48 · Moving efficiently through the kitchen

Here's what I've liked reading this week, dive in!
🎸 Using Raycast AI to auto create song wishlists
I picked up writing an article again! Though it's a rather short read, it really helped solve an annoying problem for me as a music enthusiast with the help of Raycast & iOS.
💡 Ikea rolled out their new Matter smart home lights & sensors.
I personally can't wait to get my hands on them in Belgium soon. https://www.ikea.com/global/en/newsroom/retail/the-new-smart-home-from-ikea-matter-compatible-251106/
🟠 Mistral releases new frontier models & an alternative to Claude Code
They released a new Mistral 3 model and also made the edge model Ministral 3 publicly available. I've installed it via Ollama and will try chatting with it in Open WebUI to see the performance for smaller tasks & chats.
Vibe has been fun to experiment with, being able to use their cloud hosted version of Devstral 2 for free or link it up to a local model.
🇨🇳 Things keep moving in China's AI space
DeepSeek released new versions of their open weight model. ByteDance tries to roll out wired-to-your-life AI in China.
🧭 Reads on engineering culture & principles
I rediscovered some bookmarked reads, plus new ones pointed out by coworkers, on building an engineering culture, what principles bring clarity & direction to a team, and how you innovate & evolve sustainably.
🤝 Balancing trust & competence in the workplace
My good friend first, coworker second, Bart Schroyen sent me this article on how trust & competence tie into the workplace and building high-performing teams.
As a music enthusiast, I enjoy exploring a wide range of genres and artists. I collect music for various reasons: records I plan to buy later, tracks I want to add to my Plex library (I’m a big Plexamp user!), or songs I can already imagine including in a DJ set. Since I prefer to own the digital files and store them in my Plex collection, I need a "wantlist" to refer to occasionally and archive into my library.
The challenge I’ve faced repeatedly is the lack of a simple way to combine all the different sources where I discover music into one list. My usual sources include YouTube, Shazam, Spotify, Instagram comments or posts, and random webpages.
In the past, I relied on Spotify, saving everything to a large "wantlist" playlist. However, this approach had limitations: some music wasn’t available on Spotify, or the versions differed. Switching between apps also made the process cumbersome.
I once attempted to write a Go exporter to aggregate lists from various services (YouTube playlists, Spotify playlists, my Shazam collection). But like many side projects, the effort quickly became overwhelming, and I abandoned it.
My best attempt so far at automating it was to take screenshots and store them in a specific Apple Notes folder. This worked well in some ways: I had everything in one folder, accessible from all my devices, and could screenshot any service on my phone. But there was a big downside: my camera roll filled up with screenshots, creating huge clutter whenever I scrolled to find a certain photo memory.
Having become too annoyed recently, I decided to spend some time on making it better. I wanted to use AI to identify the music and store the results in a plain-text Apple Note. Since I lovingly use Raycast AI on my phone, I created a Shortcut tied to my Action button to discover and save music:
The process now works smoothly! Uploading the image takes a little time, but I’m usually not in a hurry when doing this. If I do want quicker results, I adapted a Shortcut I found online to first use OCR on the screenshot and send the extracted text as a plain text prompt to the LLM.
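The OCR-first variant boils down to shaping the extracted text into an instruction the LLM can act on. A minimal sketch of that idea (this is not the actual Shortcut; the prompt wording is my own):

```python
def music_prompt(ocr_text: str) -> str:
    """Wrap OCR'd screenshot text in an instruction asking the LLM for a
    clean 'Artist - Title' list, ready to append to an Apple Note."""
    return (
        "The text below was extracted from a screenshot of a music app. "
        "List every song you can identify, one per line, formatted as "
        "'Artist - Title', and nothing else.\n\n" + ocr_text
    )
```

The plain-text prompt is then handed to the LLM step of the Shortcut instead of the raw image, which skips the slow image upload.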
Here's a short screen recording of the Shortcut in action on a killer Rod Stewart song. It doesn't show the song being added to my note, but trust me it works 😉
Any questions? Like a copy of the Shortcut? Just send me a message and I'll help 👋
7.12.2025 17:16 · Saving discovered music using AI

Rushing this one a bit since I didn't really find the time to properly follow up and the list of interesting things I read sat idle for a while.
Still wanted to post about the following nice reads:
Short & sweet insight on using two high-performing languages side by side at TikTok
I saw this one pop up and am hoping for a use case where I can try it out. Using the same CLI and API as local models installed via Ollama, you can now also offload the compute to their (supposedly) secure & private cloud platform.
What has always been implicit has now been made explicit: React Native is owned by the consortium of companies driving it for the last few years. Meta is taking a more formal/reviewing role and most notably Callstack, Expo, Software Mansion are driving development forward from now on.
I was super impressed to see Upgrade Estate dip their toes in the cloud & AI waters here in Belgium. They're quite the cool & innovative company, and seeing them think about building a competitor to the cloud hyperscalers and AI moguls, with a heightened focus on sustainability, is something I'm personally a very big fan of and curious to see develop further.
Learned about this by listening to the AI podcast from EPFL / Marcel Salathé. Though it's already a bit older I still found it very cool to see an LLM specifically trained for the medical space.
16.11.2025 15:36 · tl;dr — Week 47, 2025

A nice service that I stumbled across. Subscribe to an online calendar that displays the weather forecast right in your calendar application of choice. Useful when scheduling sun-required events.
Why is no one talking about this? This is why I don't use an AI browser. You can literally get prompt injected and your bank account drained by doomscrolling on reddit: https://t.co/keiz7bL2XX
— zack (in SF) (@zack_overflow) August 23, 2025
OpenAI, Perplexity, Dia etc. are all focusing on transforming the browser into the next big gateway to AI for everyday users. This builds directly upon last week's tl;dr on AI security, with Perplexity Comet proving the point to be wary of agentic AI.
This article was shared all over the internet the last weeks. How much cognitive power do we offload and are we willing to lose any?
Every time a major event or crisis occurred near the Pentagon, neighboring pizza places saw their orders increase. Over time, journalists became aware of this and started observing the pizza shops to assess the seriousness of ongoing events. A fun read, with its own history-tracking website.
This article is in Dutch
A highly resonating article. I often find myself wondering whether the tech sector considers itself a bit too larger-than-life. Are we losing our focus on delivering value to hype & self-credit?
9.9.2025 18:56 · tl;dr — Week 36, 2025

Welcome to my latest experiment: a bi-weekly newsletter called Thib loved to read, aka tl;dr.
Anyone who really knows me knows I'm a lists guy. I make them for literally everything. And I'm also someone who can't help but share cool stuff I stumble across - a brilliant article, a new release of a piece of software I got excited about, a cool video I watched and learned a thing or two from, or just something that made me think "nice, that's clever."
So I've been thinking for a while now: why not combine these two things and share my lists?
I'll be hosting my own curated list newsletter that collects nice articles, product launches, release notes, random columns — basically all the stuff that I really liked reading.
I aim to make it a bit of a personal experiment as well, where I will try to use an AI assistant to help me write the actual newsletter.
For a while I've been toying with the idea of making an LLM aware of my tone of voice and writing style by feeding it my earlier written content, so that afterwards I can simply feed it my Karakeep list, my notes of the week or maybe even the Safari tabs on my phone, and see if it can draft the newsletter for me.
Ideally I can just make micro-edits and post my ✨ editorial notes ✨ & thoughts. I'm not quite sure if I'll like it but let's see, the pivot to writing these myself is easy enough to make 😺
I won't be using AI right off the bat and will write them myself first. If I change that it will be indicated clearly in the email.
It lays out the basics of prompt injection but, more importantly, demonstrates once again that no company, big or small, is safe from what is essentially a very basic AI mind-trick.
I like the steps HA is taking to make AI feel more embedded in the platform, especially by focusing on local AI solutions like Ollama in their examples. With the introduction of AI Tasks, I am eager to get my hands on it and start thinking about and creating possible tasks such as weather reports, notifications, and schedule briefings using my locally hosted models.
Google introduced a new lightweight/edge model variant of Gemma 3. Don't expect it to be very good as an assistant or to answer questions correctly, but I'm looking forward to trying it out in Home Assistant's AI Tasks and seeing how it does, or finetuning it on my own content 🤔. Given that it's a 270-million-parameter model, it should run quickly enough locally on recent Mac hardware.
I have long been a fan of Mistral. Releasing their Document AI offering on Azure will surely open doors for companies that may be hesitant to send data to Mistral's own cloud (La Plateforme) but are looking to leverage AI-enhanced document workflows.
Wendover Productions' videos are always a great source of learning new things for me. In this video, he explains how AI consumes energy from the grid and what that means for households. It's a very American problem (for now), but it provides a real understanding of what it means for our global energy infrastructure.
I've written a short bit on this exact video on LinkedIn before, but when paired with the first article in this newsletter, it becomes highly relevant again. I felt it was important to include it again here.
I knew what NTP was, but not how it worked. This short 8 min video breaks down the different layers of time syncing (Stratum levels) and explains how all of the digital clocks on our laptops, phones, or connected watches try to stay in sync with the correct time.
25.8.2025 07:24 · tl;dr — Week 33, 2025

I've recently been playing around with AI a lot: getting a solid grasp of the providers on the market, the performance of selfhosting, and the concepts of MCP, RAG, prompt engineering and so forth.
Another thing recently on my radar is consciously choosing European technology. I believe that with everything going on in the world right now and the strong innovation coming out of the EU tech scene, this is the right bet to take.
So, let's combine those two and look into how you can integrate Mistral's new OCR feature (a European AI provider) together with LiteLLM (a selfhosted model gateway).
Combining OCR with LLMs, such as passing Mistral OCR results to Mistral Small, opens up interesting use cases.
For instance, you can extract text from receipts or long white paper PDFs and then analyze or generate insights using that LLM. This combination is particularly useful for automating data extraction, enhancing document processing workflows, and enabling advanced text analysis from visual content. I'll take you through some of the details on how to set up Mistral's relatively new OCR feature in LiteLLM.
Having LiteLLM is a real blessing: you get a single gateway managing all your models (no clicking around on different websites or API endpoints) and usage & billing insights per platform consuming your gateway (e.g. Open WebUI, Bruno, your OpenAI-compatible app...).
To integrate Mistral OCR with LiteLLM, the first step is to configure a passthrough route in LiteLLM. This route lets LiteLLM communicate with the Mistral OCR service by simply relaying the request: LiteLLM does not transform or inspect any data, but acts purely as a proxy, passing the request 1:1 to Mistral's endpoint.
This configuration cannot be done through the LiteLLM UI; you will need to modify the YAML configuration file directly. That's a bit of a bummer and I don't understand why it isn't possible, but hey, it might come in a later release.
In LiteLLM's config.yaml add the following pass_through_endpoints configuration under the general_settings:
general_settings:
  pass_through_endpoints:
    - path: "/mistral/v1/ocr"
      target: "https://api.mistral.ai/v1/ocr"
      headers:
        Authorization: "bearer os.environ/MISTRAL_API_KEY"
        content-type: application/json
        accept: application/json
      forward_headers: True

We don't want to hardcode our Mistral API key, so we pass it as an environment variable. Make sure that however you're running LiteLLM, you've set the env var. I typically run everything in Docker Compose and provide it as a litellm.env file to my Compose config's env_file directive:
DATABASE_URL="postgresql://llmproxy:*****@litellm_db:5432/litellm"
STORE_MODEL_IN_DB=True
MASTER_KEY="*******"
POSTGRES_DB=litellm
POSTGRES_USER=llmproxy
POSTGRES_PASSWORD=******
# Ollama
OLLAMA_API_BASE=http://******:11434
OLLAMA_API_KEY=""
MISTRAL_API_KEY=********

Once you have configured the passthrough route and restarted LiteLLM, you can start calling the Mistral OCR API through LiteLLM. Below are examples of how to make API calls.
To send an image for OCR processing, you can use the following curl command:
$ curl --request POST \
    --url http://litellmhost.local:4000/mistral/v1/ocr \
    --header 'authorization: Bearer LITELLM_API_KEY' \
    --header 'content-type: application/json' \
    --data '{
      "model": "mistral-ocr-latest",
      "document": {
        "image_url": "https://raw.githubusercontent.com/mistralai/cookbook/refs/heads/main/mistral/ocr/receipt.png"
      }
    }'

Example response:
{
  "pages": [
    {
      "index": 0,
      "markdown": "# PLACE FACE UP ON DASH <br> CITY OF PALO ALTO <br> NOT VALID FOR ONSTREET PARKING \n\nExpiration Date/Time 11:59 PM\n\nAUG 19, 2024\n\nPurchase Date/Time: 01:34pm Aug 19, 2024\nTotal Due: $\\$ 15.00$\nRate: Daily Parking\nTotal Paid: $\\$ 15.00$\nPmt Type: CC (Swipe)\nTicket \\#: 00005883\nS/N \\#: 520117260957\nSetting: Permit Machines\nMach Name: Civic Center\n\\#*****-1224, Visa\nDISPLAY FACE UP ON DASH\n\nPERMIT EXPIRES\nAT MIDNIGHT",
      "images": [],
      "dimensions": {
        "dpi": 200,
        "height": 3210,
        "width": 1806
      }
    }
  ],
  "model": "mistral-ocr-2503-completion",
  "usage_info": {
    "pages_processed": 1,
    "doc_size_bytes": 3110191
  }
}

OCR'ing a PDF is also straightforward but uses a different request body format. I've included a screenshot from Bruno showing the request format & response to OCR my public resume:
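For reference, the PDF variant of the request would look roughly like this through the same passthrough route. This is a sketch following Mistral's document_url request format; the document URL is a placeholder, not my actual resume link:

```shell
$ curl --request POST \
    --url http://litellmhost.local:4000/mistral/v1/ocr \
    --header 'authorization: Bearer LITELLM_API_KEY' \
    --header 'content-type: application/json' \
    --data '{
      "model": "mistral-ocr-latest",
      "document": {
        "type": "document_url",
        "document_url": "https://example.com/my-resume.pdf"
      }
    }'
```

The response has the same shape as the image example, with one entry in "pages" per PDF page.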
While integrating Mistral OCR with LiteLLM offers several benefits (unified interface, scalability, single source of truth), there are still some areas that need improvement IMO to make this setup truly compelling. The most important one being cost monitoring.
Integrating Mistral OCR with LiteLLM offers a streamlined approach to managing OCR tasks within a unified AI model gateway.
While the current passthrough API setup has limitations, particularly in cost monitoring, the benefits of a centralized interface and enhanced functionality make it a valuable addition. Next steps would definitely be to look at how you can integrate Mistral OCR with a Mistral model like Pixtral or Small to do actual processing.
I'm thinking of integrating those Mistral LLM models via LiteLLM to analyze the OCR output, e.g. extracting insights from receipts or summarizing long PDF white papers.
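Chaining the two could look something like the sketch below: take the markdown the OCR endpoint returned and feed it to a chat model through LiteLLM's OpenAI-compatible /v1/chat/completions route. The gateway host and model name are assumptions about what's configured in your LiteLLM instance:

```python
import json
import urllib.request

# Assumption: your LiteLLM gateway host, as in the curl examples above.
LITELLM_BASE = "http://litellmhost.local:4000"


def build_summary_request(ocr_markdown: str,
                          model: str = "mistral-small-latest") -> dict:
    """Build an OpenAI-compatible chat payload asking the LLM to
    summarize the markdown that the OCR endpoint returned."""
    return {
        "model": model,  # assumed model name registered in LiteLLM
        "messages": [
            {"role": "system", "content": "You summarize OCR'd documents."},
            {"role": "user",
             "content": f"Summarize this document:\n\n{ocr_markdown}"},
        ],
    }


def summarize(ocr_markdown: str, api_key: str) -> str:
    """POST the payload to LiteLLM's chat completions route."""
    req = urllib.request.Request(
        f"{LITELLM_BASE}/v1/chat/completions",
        data=json.dumps(build_summary_request(ocr_markdown)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since the chat call also goes through the gateway, the LLM half of the pipeline does show up in LiteLLM's usage & billing insights, unlike the passthrough OCR call.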
I invested quite a lot in Zigbee hardware over the years, resulting in all my lights being smart lights and having accompanying Zigbee remote controllers on the wall. Since I prefer a similar look and feel I have quite a lot of copies of the same model like IKEA's Strybar remote.
Once their battery dies, instead of instantly replacing the battery, I tend to resort to automations (like automatic presence lighting when entering a room), or switching the light on with the Home Assistant app, thus leaving the depleted device on the wall for a longer time.
It's only when a handful of them are depleted that I take them all upstairs to my desk, plug the rechargeable batteries into the charger and swap in fresh coin cell batteries. But this comes with a downside. Here's how it looks after all batteries are refreshed; have fun guessing which remote goes in which room:
In theory I could go to the room, press every remote until I find the right one, to the annoyance of my wife seeing lights magically turn on or off in random rooms she might be in. Or I might even change my habit of saving them up for a big battery reload/replace and do that adhoc when a battery dies.
But changing habits is hard and I resorted to a much simpler route: a Dymo label maker. Smack a label on it and never have to guess the device again! Two words of caution here though:
The Zigbee network address (e.g. 0x2B84) resets if you ever have to pair the device again. I instead opted for the IEEE address, the equivalent of a MAC address in Zigbee networking, which is unique to the device and doesn't change after a re-pair.

Now the only parameter I should adjust over time is the device friendly name in Zigbee2MQTT / Home Assistant. If I enter the last four characters of the label in Zigbee2MQTT, I'm immediately able to see where the device should go after a battery swap:
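The rename itself can also be scripted over MQTT using Zigbee2MQTT's bridge request API, handy if you want to fix several friendly names in one go after a labeling session. A small sketch (the IEEE address and name below are made-up examples; broker details are up to you):

```python
import json


def rename_request(ieee_address: str, friendly_name: str) -> tuple[str, str]:
    """Build the MQTT topic and payload that ask Zigbee2MQTT to rename a
    device, e.g. after reading its IEEE address off the Dymo label."""
    topic = "zigbee2mqtt/bridge/request/device/rename"
    payload = json.dumps({"from": ieee_address, "to": friendly_name})
    return topic, payload


# Publish with any MQTT client, for example:
#   mosquitto_pub -t "$topic" -m "$payload"
```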