Link: https://latex2image.joeraut.com/
Simple website to turn a #latex formula into a PNG.
6.2.2025 07:38 Link: Render Latex formulas as images

★★★★★ Accelerando by Charles Stross
One of my favorite science fiction books. It is set in a world shortly before and after the AI singularity. I don't know how to describe it in a review without spoiling the journey, but I found many parallels to technologies and trends that would only emerge years after the publication of Accelerando.
4.2.2025 20:27 Review: Accelerando by Charles Stross

Link: https://www.youtube.com/watch?v=JxI3Eu5DPwE
Great talk about architectures for #gamedev. He also wrote a book about game programming patterns.
31.1.2025 20:32 Link: Bob Nystrom - Is There More to Game Architecture than ECS?

With the upcoming Bundestagswahl 2025 (federal election), I sat down for an evening to visualize how the polling results have developed over time. The data comes from https://wahlrecht.de.
Points show individual polls; the line is the average of the last 14 polls. Data from all polling institutes was used. For the "Gesellschaft für Markt- und Sozialforschung" I discarded the "Sonstige" (others) and BSW values, as these were listed together in a single column.
#politik #dataviz #btw25 #data
29.1.2025 20:20 Visualisierung: Sonntagsfrage

For the last Bundestagswahl I built a small tool to search and visualize the topics in the parties' election manifestos.
With the upcoming election it was time to add the new manifestos. Most of them are currently only drafts; only the Union already has a finished manifesto.
A short guide can be found here: https://blog.libove.org/posts/wahlprogramme/
The source code is available on GitHub: https://github.com/H4kor/wahlprogramme
31.12.2024 10:39 Update: Wahlprogramme Tool

Link: https://ianjk.com/ecs-in-rust/
Guide on how to build a simple ECS in #rust.
29.12.2024 11:03 Link: ECS in Rust

#vegan roulades with red cabbage and dumplings
24.12.2024 19:30 Veganes Weihnachtsessen

Link: https://brouter.de/brouter-web/
The best bicycle route planner I have used so far.
16.9.2024 17:34 Link: Navigation for Bicycle

Link: https://vonheikemen.github.io/devlog/tools/setup-nvim-lspconfig-plus-nvim-cmp/
I couldn't figure out how to properly configure autocomplete in #neovim. The "jump to next placeholder" was never working. This article finally solved my problem and it explains WHY you need all these different components.
10.9.2024 18:45 Link: How to properly setup autocomplete in NeoVim
Servings: 1 large pot

"Chili" sin Carne recipe for large quantities, for freezing.
I've worked on some smaller features and improvements for owl-blogs.
Main Features:
For development I took the time to set up the "end-to-end" tests using go test instead of the previous pytest setup. This vastly simplifies testing and it's much quicker.
To test #ActivityPub functionality I use a small mock server whose content can be controlled during testing.
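As a rough sketch of the idea (the actual owl-blogs test helpers and names will differ), such a mock can be built on net/http/httptest: the test registers canned ActivityPub responses and records incoming requests for later assertions.

package e2e_test

import (
	"net/http"
	"net/http/httptest"
	"sync"
	"testing"
)

// mockActivityPub serves canned ActivityPub JSON documents and records
// every request it receives, so a test can assert on the outgoing calls.
type mockActivityPub struct {
	mu        sync.Mutex
	responses map[string][]byte // URL path -> JSON body to return
	requests  []*http.Request   // requests received during the test
}

func (m *mockActivityPub) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.requests = append(m.requests, r)
	if body, ok := m.responses[r.URL.Path]; ok {
		w.Header().Set("Content-Type", "application/activity+json")
		w.Write(body)
		return
	}
	http.NotFound(w, r)
}

func TestActorFetch(t *testing.T) {
	mock := &mockActivityPub{responses: map[string][]byte{
		"/users/alice": []byte(`{"type":"Person","preferredUsername":"alice"}`),
	}}
	srv := httptest.NewServer(mock)
	defer srv.Close()

	// Point the blog under test at srv.URL, run the scenario,
	// then inspect mock.requests to verify the expected calls were made.
	_ = srv.URL
}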
20.7.2024 18:43 owl-blogs 0.3.2

Reply to: https://indieweb.social/@OpenMentions/112773889161182871
I've had #IndieAuth implemented in the "v1" of my blog, but only used it for the #IndieWeb wiki. I did not yet bother to reimplement it in v2.
20.7.2024 17:08 Re: IndieAuth

This guide is written for gtk-rs and GTK 4.6.
For my DungeonPlanner project I wanted to use custom icons for the tool buttons. Unfortunately the documentation for this is rather slim. As an additional constraint, the image data should be embedded in the binary as I like to compile and ship a single binary file (as long as this is possible).
The closest approach I could find was to create buttons with downloaded images. The snippet is for GTK3, so it had to be adjusted for GTK4, but it contained the right function names to know what to search for :).
This is the final code I ended up with:
let button = Button::new();
let bytes = include_bytes!("../../assets/icons/add_chamber.png");
let g_bytes = glib::Bytes::from(&bytes.to_vec());
let stream = MemoryInputStream::from_bytes(&g_bytes);
let pixbuf = Pixbuf::from_stream(&stream, Cancellable::NONE).unwrap();
let texture = Texture::for_pixbuf(&pixbuf);
let image = Image::from_paintable(Some(&texture));
button.set_child(Some(&image));
We start by creating a button. Instead of using the ButtonBuilder as you would normally do, I'm just creating an "empty" button. It should be possible to still use the builder, as we are just replacing the child content at the end.
let button = Button::new();
Next we need to load our image data. As I want my images to be embedded in the binary I use the include_bytes! macro. The raw bytes are then turned into a glib::Bytes struct and finally into a MemoryInputStream. The stream is needed to parse the image data.
let bytes = include_bytes!("../../assets/icons/add_chamber.png");
let g_bytes = glib::Bytes::from(&bytes.to_vec());
let stream = MemoryInputStream::from_bytes(&g_bytes);
The next goal is to create an Image object containing our embedded image. With GTK 4.6 we could still use Image::from_pixbuf, but this will be deprecated in GTK 4.12. Instead we have to do an extra step and create a Texture and use Image::from_paintable. The texture can simply be created from a Pixbuf, which is created using the Pixbuf::from_stream function.
let pixbuf = Pixbuf::from_stream(&stream, Cancellable::NONE).unwrap();
let texture = Texture::for_pixbuf(&pixbuf);
let image = Image::from_paintable(Some(&texture));
Finally we can set the child of our button to the image and our icon button is done. The same approach also works for ToggleButton.
button.set_child(Some(&image));
17.7.2024 18:54 Rust GTK: Button with Custom Icon with gtk-rs using Embedded Images

The stated goal of OpenAI is "to ensure that artificial general intelligence benefits all of humanity". With the release of ChatGPT, they might have missed humanity's only shot at creating an Artificial General Intelligence (AGI).
The performance ("intelligence") of large language models (LLMs) mainly depends on the scale of its training data and the size of the model [1]. To create a better LLM you need to increase its size, and train it on more data. The architecture and configuration of an LLM can almost be neglected in comparison.
However, the intelligence of a model does not grow linearly with its size [1]. There is a diminishing return on increasing the size of LLMs: increasing the model and training set by 10x only yields roughly the same performance gain as the previous 10x increase did. This explains why vast amounts of data are required to effectively train large language models.
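For context, such scaling laws are typically power laws in the parameter count N and the number of training tokens D; a common form is the Chinchilla-style fit (whether this is exactly the relationship behind [1] is my assumption):

L(N, D) = E + A / N^α + B / D^β

Since N and D enter only through negative powers, every further 10x in model and data buys a smaller absolute improvement in loss, which is the diminishing return described above.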
ChatGPT 3 was trained on 500 billion tokens [2]. There are no official numbers (that I could find) on how much data was used to train ChatGPT 4, but rumors in the AI community state that it was trained on 13 trillion tokens. With these numbers, the performance step from 3 to 4 took a 26x increase in data.
The estimated size of the publicly available, deduplicated data is 320 trillion tokens [4]. This is 24.6x more data than ChatGPT4 was (likely) trained on. If these numbers are correct, the performance of LLMs will only increase about as much as it did from ChatGPT3 to ChatGPT4 before we run out of data.
I doubt that this will be enough to reach AGI level intelligence.
Now you might say "we produce more data every day, models can get better in the future". We could just wait some decades, train new models from time to time and see a gradual increase in performance. And one day we suddenly have an AGI. But the release of ChatGPT created a problem. It poisoned any data collected after November 30, 2022.
The release of ChatGPT was the wet dream of every spammer, bot operator, troll and wannabe influencer. Suddenly you could create seemingly high-quality content, indistinguishable from human-written text, at virtually zero cost. Ever since, the internet has been filling up with LLM-generated content. And this is a problem for all future training runs.
Models that are trained on their own generations (or data created by other models) start to forget; their performance declines [3]. Therefore you have to avoid training on AI-generated content, otherwise the increase in data may decrease your performance. As it is virtually impossible to clean a dataset of AI-generated text, any data collected after Nov. 2022 should be avoided. Maybe you can still use a few more months or years of data, but at some point more data will hurt the model more than it helps.
The publicly available data that we have now is all we will get to train an AGI. If we need more data, it will have to be collected at an exorbitant price to ensure it's not poisoned by AI-generated data.
The size of current training sets and of the potentially available data shows that we will not reach AGI levels with the current state-of-the-art approaches. Due to data poisoning we will not get substantially more usable data. Thereby AGI is (currently) not achievable. Opening Pandora's box of accessible generative AI may have killed our chance of creating an artificial general intelligence.
If we want to build an AGI we will have to do it with the data we have now.
10.7.2024 20:12 Missed Shot at Artificial General Intelligence
Servings: 1 portion

This recipe was a bit improvised, but it turned out really well. The quantities were estimated after the fact, so taste and adjust as needed.
#vegan #nudeln #udon #zucchini #karotten #szechuan #szechuanpfeffer
30.6.2024 09:19 Szechuan Pfeffer Udon Nudeln

List of random tables I've created to quickly create shops or treasures when DMing.
#DnD #5e #random #generator #tabletop
29.6.2024 15:05 Random Tables for DnD 5e
Servings: 3 portions

A simple and quick pasta dish, ideal for toddlers. Tastes great with vegan parmesan.
#vegan #rezept #nudeln #erbsen
23.6.2024 10:01 Nudeln mit Erbsen
Servings: 4 portions

The vegan version of my favorite pasta salad.
#vegan #rezept #nudelsalat #einfach
20.6.2024 19:55 Veganer Pesto Nudelsalat mit Rucola

I recently added hashtag support to owl-blogs. The initial reason for this was to make posts more discoverable via ActivityPub, but I found it helpful for further categorizing posts. The implementation was quite simple. I use the markdown renderer goldmark, which has a plugin for hashtags.
As tags are also part of microformats2, I wanted to mark the hashtags accordingly. This is currently not possible with the hashtag extension.
I've extended this to allow adding arbitrary attributes to the link tag (Related Pull Request).
Until this is merged into the main repository I'll use my own version, which can be done by adding a replace directive to the go.mod
replace go.abhg.dev/goldmark/hashtag => github.com/H4kor/goldmark-hashtag v0.0.0-20240619193802-bec327f5be38
#go #dev #owlblogs #markdown #indieweb
19.6.2024 20:15 Microformat Categories for Hashtags in owl-blogs

Link: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
Great rant about #AI
19.6.2024 17:55 Link: I Will Fucking Piledrive You If You Mention AI Again

#emsflower #turtle #photography
18.6.2024 19:51 Turtle

Link: https://situational-awareness.ai/
Thesis arguing that #AGI will be reached in the next decade. Have not yet read the full text.
Problems I see with the argumentation so far:
The data wall is a (hard) open problem. Larger models will either need massively more data or have to be orders of magnitude more efficient with the given data.
More data is not available at that scale. The latest models are already trained on virtually the entire internet. They argue that a strategy similar to AlphaGo, where the model created its own training data by playing against itself, could be deployed for #GenAI. I find this implausible, as generating intelligent text worth training on already requires a more capable AI.
Similarly, being more efficient with data is still an open problem, but I don't know enough about this to evaluate how likely it is to happen.
Additionally, going forward, any new datasets will be poisoned by AI output, as these models are already used at massive scale to create "content". Research suggests that training on such data degrades the performance of models.
Even if the data wall is broken, scaling the models will need massive computing power, consuming ever more resources and energy. This will only be tolerated by the general public as long as it is overall beneficial to them. There are already industries (mainly creative ones) being massively disrupted by the current generation of GenAI. Philosophy Tube, in "Here's What Ethical AI Really Means", shows how strikes and collective action can be tools to prevent the development of more powerful AI systems.
18.6.2024 17:51 Link: Situational Awareness - The Decade Ahead

Reply to: https://floss.social/@carlschwan/109774012599031406
A nice and simple solution to add comments to your blog.
I've decided to go the slightly more complicated route and add #ActivityPub support to my blog directly. Any interaction will show up below the posts. However, this requires a backend and will not work with a static site generator.
16.6.2024 06:18 Re: Commenting via Mastodon

I'm currently working on the thumbnail implementation for #owl-blogs and deployed the feature branch to my blog.
Thumbnails use the same file format as their parent files, assuming the user already chose the best format for their images. The thumbnails are created with a width of 620px, equal to the content width of the blog's main body. If the image is already small enough, the image data is simply copied.
The URL of a thumbnail can simply be generated by replacing /media/ with /thumbnail/.
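As a minimal illustration (not necessarily the actual owl-blogs code; the helper names here are made up), the two pieces boil down to a path replacement and a width check on the decoded image header:

package media

import (
	"bytes"
	"image"
	"strings"

	// register decoders for the image formats used on the blog
	_ "image/jpeg"
	_ "image/png"
)

const thumbnailWidth = 620

// thumbnailURL derives the thumbnail URL from a media URL by swapping
// the /media/ prefix for /thumbnail/.
func thumbnailURL(mediaURL string) string {
	return strings.Replace(mediaURL, "/media/", "/thumbnail/", 1)
}

// needsResize reports whether the image is wider than the target width.
// If it is not, the original data can simply be copied.
func needsResize(data []byte) (bool, error) {
	cfg, _, err := image.DecodeConfig(bytes.NewReader(data))
	if err != nil {
		return false, err
	}
	return cfg.Width > thumbnailWidth, nil
}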
I will still write some tests and see if any errors occur on my blog before merging this feature into the main branch.
10.6.2024 19:14 Thumbnails for owl-blogs

My blog software (owl-blogs) uses a single SQLite database to store everything, including all uploaded files. I'm aware that storing large files in a relational database isn't best practice. It started out as a placeholder implementation, but I liked the idea of having a single file I can back up.
One often-stated reason against storing binary blobs in relational databases is read performance, but I didn't find any benchmarks supporting this claim. Therefore I built a small test setup to see the difference between serving binary files out of a SQLite database and serving them from the file system directly.
As my blog is written in Go, I created a simple server similar to my blog. It uses sqlx and go-sqlite3 for the database handling and net/http for the static file server:
package main
import (
"log"
"net/http"
"github.com/jmoiron/sqlx"
_ "github.com/mattn/go-sqlite3"
)
type sqlBinaryFile struct {
Data []byte `db:"data"`
}
type sqlHandler struct {
Db *sqlx.DB
}
func (h *sqlHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
id := r.PathValue("filename")
var sqlFile sqlBinaryFile
h.Db.Get(&sqlFile, "SELECT data FROM files WHERE id = ?", id)
w.Write(sqlFile.Data)
}
func main() {
db := sqlx.MustOpen("sqlite3", "files.db")
sql := &sqlHandler{Db: db}
fs := http.StripPrefix("/dir/", http.FileServer(http.Dir("./static")))
http.Handle("/dir/", fs)
http.Handle("/sqlite/{filename}", sql)
log.Print("Listening on :3000...")
err := http.ListenAndServe(":3000", nil)
if err != nil {
log.Fatal(err)
}
}
As a test set I created 2000 files between 200 kB and 4 MB in size using a simple Python script:
import os
import random
for i in range(2000):
os.system(f"head -c {(random.randint(200, 4000))}K </dev/urandom > static/{i:05d}.bin")
The SQLite database was created with this script:
import os
import sqlite3
os.remove("files.db")
con = sqlite3.connect("files.db")
cur = con.cursor()
cur.execute("CREATE TABLE files( id VARCHAR(255) PRIMARY KEY, data BLOB NOT NULL )")
for f in os.listdir("static"):
print(f)
data = open("static/" + f, "rb").read()
cur.execute("INSERT INTO files(id, data) VALUES (?, ?)", (f, data))
con.commit()
To benchmark the server I created two files listing all file URLs (one for SQLite, one for the filesystem) and used siege to run the benchmark with this configuration:
siege -f urls_sqlite.txt -c 1 -b --time=10s -j
The test was executed on my laptop:
I ran the test with different concurrency and plotted the results:
For a low-throughput system (such as my blog) the difference between SQLite and the filesystem is small enough not to care about. The possible throughput (transactions/second) of the filesystem is ~2.3 times higher, and its response time grows more slowly with increased concurrency.
For the time being I will stick with my SQLite solution. Once my blog gets really popular I can easily change the implementation of the binary repository.
#hosting #sqlite #go #benchmark #server
6.6.2024 19:56 Serving Binary Files from SQLite

Reply to: https://chaos.social/@isotopp/112517363217825291
How many lightbulbs can I control with a single CPU?
1.6.2024 16:32 Re: Idea for Blogpost

Black and white profile picture of an alpaca. Its hair partially covers its eyes.
31.5.2024 18:29 Punk

Reply to: https://jamesg.blog/2024/05/29/nanosearch/
This looks great! I've been playing with the idea of creating a search engine for a while now. Maybe this will help to overcome my procrastination :D
31.5.2024 17:58 Re: Nanosearch

I've implemented basic ActivityPub support in owl-blogs.
Feel free to check out version 0.2.0. Feedback is always appreciated :)
31.5.2024 17:38 Owl-blogs goes ActivityPub

Shot at #Emsflower.
30.5.2024 19:34 Bird

Shot at #Emsflower
29.5.2024 19:21 Blue Butterfly

Shot during day trip to #Emsflower
29.5.2024 19:20 Caterpillar

Shot during day trip to #Emsflower
29.5.2024 19:15 Artificial Flowers
Servings: 4 portions

A simple, vegan, non-spicy peanut sauce. I swapped the classic sambal oelek (or chili sauce) for ajvar to adapt the dish for children.
I use the soy chunks from Vantastic as a meat substitute. Alternatively, firm tofu can be pressed and coated in starch. Or simply leave it out entirely, it still works :).
The dish is served with either rice or udon noodles.
An important lesson I learned: if the sauce becomes too thin, do not thicken it with starch. The starch pulls all the flavor out of the sauce.
#vegan #rezept #erdnuss #brokkoli
19.5.2024 19:59 Vegane Erdnuss Brokkoli Pfanne

Owl-blogs has reached a certain level of maturity and I think it is stable enough to be used by other people. As I have a lot of articles on my blog, any future changes have to be compatible or need an automated adjustment, even if I'm the only person using the software.
If you like to try out owl-blogs or even contribute to the project, the main location of the code is now on GitHub.
18.5.2024 20:57 Moved owl-blogs to GitHub

A robin sitting on a decorated perch
18.5.2024 14:26 Rotkehlchen

A frog sitting in a pond and croaking
12.5.2024 09:53 Frog 2

I'm currently rebuilding the design of my blog and want to use a complementary color palette. As I'm not yet sure which colors I will use, it would be ideal to automatically derive the secondary color from the primary color.
In the future this can be done using the relative color feature coming to CSS, but this isn't yet widely supported.
--primary: hsl(170, 66%, 28%);
--secondary: hsl(from var(--primary) calc(h + 180) s l);
I figured out that you can achieve the same effect using the color-mix() function. This is already supported by all major browsers.
--primary: hsl(170, 66%, 28%);
--secondary: color-mix(in hsl longer hue, var(--primary), var(--primary) 50%);
The declaration mixes the primary color with itself in the HSL color space. Additionally, it specifies that the longer path along the hue axis should be used. As this will always be the 360° path, a mix of 50% ends up at the 180° complementary color to the primary color. The saturation and lightness stay the same.
This allows for fast testing of colors using the developer console:
This can also be used to create other color palettes by changing the percentage accordingly.
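In general, mixing the primary color with itself at a fraction p along the longer hue path shifts the hue by p · 360°:

h_secondary = (h + p · 360°) mod 360°

So 50% gives the 180° complement, while roughly 33% and 67% give the ±120° offsets of a triadic palette.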
10.5.2024 17:54 Calculating the complementary color in CSS

For visualizing the heating and cooling degree days of Germany I had to learn how to plot maps.
My data source was a bunch of CSV files, which I combined into a single DataFrame, including latitude and longitude columns. I've used geopandas to plot this data for a first visualization. The DataFrame can be turned into a GeoDataFrame to create geometry information from the longitude and latitude columns. It's important to specify the right coordinate reference system (CRS) of your data.
With this conversion the data can be plotted using the plot function of geopandas:
df = load_my_dataframe(...)
gdf = gpd.GeoDataFrame(
df, geometry=gpd.points_from_xy(df["Longitude"], df["Latitude"]), crs="EPSG:4326"
)
ax = gdf.plot(
column="ValueToVisualize",
)
The shape of Germany is already recognizable, but some more context would be beneficial. For this reason I've added the borders of the German states to the plot. The shape files of all European countries can be downloaded from eurostat in various formats. I've used the "Polygon (RG)" version and the .shp format. The files contain the borders on different levels (country, state and regional borders). With a few lines the borders can be added to the plot.
border_file = "NUTS_RG_01M_2021_3035.shp/NUTS_RG_01M_2021_3035.shp"
eu_gdf = gpd.read_file(border_file)
eu_gdf.crs = "EPSG:3035"
gdf_de = eu_gdf[(eu_gdf.CNTR_CODE == "DE") & (eu_gdf.LEVL_CODE == 1)]
gdf_de.to_crs("EPSG:4326").boundary.plot(ax=ax, color="black")
Using points to visualize this data is not the best approach. Instead I want to fill the entire map where each pixel is colored according to the nearest data point. This can be achieved by computing a Voronoi Diagram, where each data point is turned into a face. I've tried to use the geoplot library to compute the voronoi diagram, but it got stuck on my data. Luckily scipy also has a voronoi implementation, but it's a bit more work to use.
# create a box around all data
min_lat = df["Latitude"].min() - 1
max_lat = df["Latitude"].max() + 1
min_lon = df["Longitude"].min() - 1
max_lon = df["Longitude"].max() + 1
boundarycoords = np.array(
[
[min_lat, min_lon],
[min_lat, max_lon],
[max_lat, min_lon],
[max_lat, max_lon],
]
)
# convert our data point coordinates to a numpy array
coords = df[["Latitude", "Longitude"]].to_numpy()
all_coords = np.concatenate((coords, boundarycoords))
# compute voronoi
vor = scipy.spatial.Voronoi(points=all_coords)
# construct geometry from voronoi
polygons = [
shapely.geometry.Polygon(vor.vertices[vor.regions[line]])
for line in vor.point_region
if -1 not in vor.regions[line]
]
voronois = gpd.GeoDataFrame(geometry=gpd.GeoSeries(polygons), crs="EPSG:4326")
# create dataframe
gdf = gpd.GeoDataFrame(
df.reset_index(),
geometry=voronois.geometry,
crs="EPSG:4326",
)
# plot borders
gdf_de.to_crs("EPSG:4326").boundary.plot(ax=ax, color="black")
# clip geometry to inside of map
clip = eu_gdf[(eu_gdf.CNTR_CODE == "DE") & (eu_gdf.LEVL_CODE == 0)]
gdf = gdf.clip(clip.to_crs("EPSG:4326"))
# plot data
gdf.plot(
ax=ax,
column="Monatsgradtage",
cmap="Blues",
legend=True,
)
The plot is almost done, I only want to clean it up a bit. As the axes don't serve any purpose here, I removed them. Additionally I've added a title and a small text at the bottom to include the source in the graphic.
ax.set_title(f"Heizgradtage 2020", fontsize=20)
ax.axis("off")
ax.text(
0.01,
0.01,
"Quelle: https://www.dwd.de/DE/leistungen/gtz_kostenfrei/gtz_kostenfrei.html\nVisualisierung: Niko Abeler CC-BY-SA 4.0",
ha="left",
va="top",
transform=ax.transAxes,
alpha=0.5,
fontsize=8,
)
23.4.2024 18:18 Visualizing Data on Maps using matplotlib and geopandas

Heating degree days (Heizgradtage) are used in energy management. To compare heating demand across several years, the effect of the weather has to be factored out; otherwise you are merely comparing the weather, since more heating is needed in cold winters. To do this, you count the degrees Celsius below a threshold (usually 15°C in Germany).
As an example: a day with a constant temperature of 5°C yields 10 heating degree days ((15°C − 5°C) × 1 day). A month at a constant 5°C yields 300 heating degree days ((15°C − 5°C) × 30 days).
The visualization shows the heating degree days for the years 2010–2023 in Germany. The colder a region is (more heating degree days), the darker it is shown. The visualization is based on data from the DWD.
In parallel to heating degree days there are also cooling degree hours (Kühlgradstunden). Here the hours above a threshold (here 18°C) are counted. An hour at 30°C thus yields 12 cooling degree hours. In this visualization, hot regions are shown in dark red.
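As a minimal sketch of the two calculations (using the thresholds described above; this is not the code behind the visualization):

package main

import "fmt"

// heatingDegreeDays sums the degrees below the heating threshold (15°C)
// over a series of daily mean temperatures.
func heatingDegreeDays(dailyMeans []float64) float64 {
	const threshold = 15.0
	total := 0.0
	for _, t := range dailyMeans {
		if t < threshold {
			total += threshold - t
		}
	}
	return total
}

// coolingDegreeHours sums the degrees above the cooling threshold (18°C)
// over a series of hourly temperatures.
func coolingDegreeHours(hourly []float64) float64 {
	const threshold = 18.0
	total := 0.0
	for _, t := range hourly {
		if t > threshold {
			total += t - threshold
		}
	}
	return total
}

func main() {
	// A day at a constant 5°C -> 10 heating degree days.
	fmt.Println(heatingDegreeDays([]float64{5})) // 10
	// One hour at 30°C -> 12 cooling degree hours.
	fmt.Println(coolingDegreeHours([]float64{30})) // 12
}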
22.4.2024 20:14 Visualisierung: Heizgradtage und Kühlgradstunden Deutschland

I've open sourced fedi-games, as previously mentioned here. The code is still quite rough, as this is a learning project for me. But maybe someone finds this useful for implementing their own ActivityPub server. Or maybe someone wants to build some new games :)
22.3.2024 18:07 Open Sourced: Fedi-Games

In order to understand the ActivityPub protocol (at some point I want to make my blog available via ActivityPub) I've created a minimal server hosting game services for the Fediverse.
The service is currently hosted on games.rerere.org and provides three games:
The games can be played by mentioning the service and your opponent in a note. I've mainly tested these with Mastodon, but they should also work with other fediverse apps.
I want to release the source code soon, but it still needs some clean-up, documentation and testing to be useful to anyone. The server is written in Go, and if you want to build something similar, these are the libraries I used to help with the ActivityPub implementation:
20.3.2024 20:27 Fediverse Games

Go isn't an object oriented language and doesn't have inheritance.
Sometimes I still want to model something in a similar way as I would in an object oriented language.
My blog software has different types of "entries", e.g. articles, images, recipes ...
This is modeled by an interface Entry which has to be implemented by each type of post.
// truncated for simplicity
type Entry interface {
ID() string
Content() EntryContent
PublishedAt() *time.Time
Title() string
SetID(id string)
SetPublishedAt(publishedAt *time.Time)
}
One problem with this is that I have to implement these functions for each type of entry.
Many of these implementations will be identical, leading to a high degree of code duplication.
To avoid this I've created an EntryBase, which implements sensible defaults.
type EntryBase struct {
id string
publishedAt *time.Time
}
func (e *EntryBase) ID() string {
return e.id
}
func (e *EntryBase) PublishedAt() *time.Time {
return e.publishedAt
}
func (e *EntryBase) SetID(id string) {
e.id = id
}
func (e *EntryBase) SetPublishedAt(publishedAt *time.Time) {
e.publishedAt = publishedAt
}
With this "base class" the specific implementations only have to implement functions which differ between entry types.
type Article struct {
EntryBase
meta ArticleMetaData
}
type ArticleMetaData struct {
Title string
Content string
}
func (e *Article) Title() string {
return e.meta.Title
}
func (e *Article) Content() model.EntryContent {
// render content from markdown
}
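As a usage sketch (not from the original post, and assuming for simplicity that the interface and the types live in the same package), the embedded defaults and the type-specific methods combine so that *Article satisfies Entry:

import (
	"fmt"
	"time"
)

func main() {
	// Relies on the Entry, EntryBase and Article definitions above.
	article := &Article{
		meta: ArticleMetaData{Title: "Inheritance Pattern in Go"},
	}

	// Setters and getters provided by the embedded EntryBase:
	article.SetID("inheritance-pattern-in-go")
	now := time.Now()
	article.SetPublishedAt(&now)

	// The concrete type can be used wherever the interface is expected.
	var entry Entry = article
	fmt.Println(entry.ID(), entry.Title())
}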
24.2.2024 21:18 Inheritance Pattern in Go

Link: https://jeffhuang.com/productivity_text_file/
https://www.jeffgeerling.com/blog/2024/my-todo-list-txt-file-on-desktop
Discussion: https://news.ycombinator.com/item?id=39432876
23.2.2024 11:07 Link: TXT as TODO/Organizer
Servings: 3 portions
#vegan #kichererbsen #pilze #rezept
5.1.2024 18:40 Pilz Kichererbsen Risotto

This guide is written for gtk-rs and GTK 4.6.
To create a menu bar in GTK4 we need a MenuModel, from which the bar is created. The MenuModel can be derived from a Menu object. Menu objects can have sub menus and MenuItems as children, building a tree structure which represents the entire menu of the application.
Let's start at the end of our menu definition, by defining the root of the menu tree. The root should only have sub menus as children.
let menu = Menu::new();
menu.insert_submenu(0, Some("File"), &file_menu);
menu.insert_submenu(1, Some("Edit"), &edit_menu);
let menu_model: MenuModel = menu.into();
This is our root menu and the derived model. The first parameter determines the ordering of the menu entries. The second parameter defines the name displayed in the menu bar, and the last parameter is a reference to another Menu object. In this example the menu has two children, "File" and "Edit", which are further menus we have to define.
The resulting menu bar will look similar to this:
The menus are Menu objects. To add selectable entries, MenuItems are inserted. The first parameter defines the ordering within the menu. I tend to leave gaps in my ordering to allow insertions later without changing all order numbers. The MenuItem takes two (optional) strings as parameters. The first string is the text shown in the menu and the latter references an action. This action is executed if the menu item is selected.
let file_menu = Menu::new();
file_menu.insert_item(0, &MenuItem::new(Some("New Dungeon"), Some("file.new")));
file_menu.insert_item(5, &MenuItem::new(Some("Open ..."), Some("file.open")));
file_menu.insert_item(10, &MenuItem::new(Some("Save ..."), Some("file.save")));
The resulting menu looks similar to this:
(Note that the shown shortcuts are automatically added when an accelerator is defined for the action.)
To define the effect of selecting a menu item, actions have to be added to the window. I insert actions into action groups to organize them by their main menu. This code defines two actions in the "file" action group: "file.new" and "file.open". The name of the action group is defined when the group is added to the window.
let file_actions = SimpleActionGroup::new();
let action_file_new = ActionEntry::builder("new")
.activate(
move |_group: &SimpleActionGroup, _, _| {
// Define behavior here
},
)
.build();
let action_file_open = ActionEntry::builder("open")
.activate(
move |_group: &SimpleActionGroup, _, _| {
// Define behavior here
},
)
.build();
file_actions.add_action_entries([
action_file_new,
action_file_open,
]);
window.insert_action_group("file", Some(&file_actions));
For further organization of menu entries, sub menus can be used. This uses the same method as on the root menu.
edit_menu.insert_submenu(20, Some("Change Mode"), &mode_menu);
This results in a sub menu similar to this
After defining the menu and deriving the MenuModel, a PopoverMenuBar can be created and added to the application window. This can be achieved by wrapping the content of the window in another box and adding the menu bar to this box.
let menubar = PopoverMenuBar::from_model(Some(&menu_model));
let window_box = gtk::Box::builder()
.orientation(gtk::Orientation::Vertical)
.build();
window_box.append(&menubar);
window_box.append(&main_box);
// Create a window
let window = ApplicationWindow::builder()
.application(app)
.child(&window_box)
...
    .build();
The full code used in these examples can be found in the DungeonPlanner Source Code
21.12.2023 14:43 Rust GTK: Creating a Menu Bar Programmatically with gtk-rs

Example Dungeon built with DungeonPlanner (also see previous post)
(All texts are generated with ChatGPT)
I've release a first "pre-version" and the source code of DungeonPlanner on GitHub.
DungeonPlanner is a small and simple tool I use to create and plan dungeons for tabletop games. It is game system agnostic and should be applicable to most tabletop games.
Feedback is appreciated as always :)
19.12.2023 20:16 Dungeon Planner - First Release

For the app I'm currently working on I need to react to mouse inputs in a drawing area. I need the position of the mouse and the possibility to react to mouse clicks.
The mouse position can be obtained by using an EventControllerMotion. This is added to the DrawingArea.
let pos_controller = EventControllerMotion::new();
pos_controller.connect_motion(move |_con, x, y| {
...
});
drawing_area.add_controller(pos_controller);
For each mouse button you want to "observe", a GestureClick is created and added to the DrawingArea.
let gesture_click = GestureClick::builder()
.button(GDK_BUTTON_PRIMARY as u32)
.build();
gesture_click.connect_pressed(move |_, _, _, _| {
    // react to the mouse click here
});
drawing_area.add_controller(gesture_click);
15.11.2023 19:33 Rust GTK4 - Mouse Events in DrawingArea

I'm currently building an application with gtk-rs and GTK4. To react to key presses, the EventControllerKey can be used. This is added as a controller to the main window.
let control_key = gtk::EventControllerKey::new();
control_key.connect_key_pressed(move |_, key, _, _| {
match key {
gdk::Key::Right => { ... },
gdk::Key::Left => { ... },
gdk::Key::Up => { ... },
gdk::Key::Down => { ... },
_ => (),
}
glib::Propagation::Proceed
});
window.add_controller(control_key);
15.11.2023 19:26 Rust GTK4 - Key Pressed
Put all ingredients into a blender and blend until you have a fine powder. Once the right consistency is reached, the parmesan no longer "flows" properly while blending and becomes slightly clumpy.
10.9.2023 09:54 Veganer Parmesan

After seeing how I put a child barrier together my daughter is obsessed with screws. She will try to turn any screw she can find, using any item barely resembling a screwdriver. Naturally, she got her own set of tools :).
As the set didn't include any screws I've decided to print some for her. The screws are compatible with the wrench and screwdriver of the "klein Bosch Work-Box". The holes of the block are loose enough to easily screw in the bolts.
I've used Blender to create these. For my first try I've created my own screw using the "Screw" modifier applied to a triangle. This worked okay, but the resulting screw was too loose and would just slip into the hole. For the next try I used the plugin "Bolt Factory" to create the screw part and only kept my previously designed head.
Things I've learned:
All files can be used under the CC-BY 4.0 License.
4.9.2023 17:30 3D Printed Toy Screws

At my hobby project Podcast de facto Standard I ran into problems querying the required data to render the report pages. My first fix for this was to query and process all data (for all reports) once per day and to store the results as one big JSON object in an additional table. This worked well until the server was running out of memory because too much data had to be loaded from the database to be processed in Python.
I generally try to avoid database-specific features for as long as possible in my projects. Ideally I'm able to use SQLite in local development and automated testing and only start using Postgres in the testing and production environments. This works great with abstraction layers such as SQLAlchemy and Django and allows me to postpone the decision for a specific database system for as long as possible. As a nice bonus it keeps the testing setup very simple, as I can simply use an in-memory SQLite.
The growing memory requirements were the point where I had to choose a specific database. As I use PostgreSQL virtually always in production systems I looked into the options it provided to optimize my queries.
The main problems were caused by the episode data, a ~9 GB table with over 11 million rows. I already had indexes in place, but the queries for the audio properties of podcasts still took over a minute per plot. This meant that "live" queries were no longer possible and I needed a solution to store (partial) results.
I've chosen to use materialized views as they are a great fit for my problems. The data presented on the page doesn't have to be updated with every newly analyzed audio file. It is sufficient to get an update once per day (even once per week would be fine). The transformations required can easily be done in SQL.
Materialized views behave almost like a regular table, but you cannot directly create or update rows in the table. Instead the content is created from an SQL statement, executed once on creation or upon refreshing the view.
Creating a materialized view is simple:
create materialized view pub_hist as
select date(publishing_date), count(*)
from episode e
group by date(publishing_date)
This command executes the selection query and stores the result in the view pub_hist.
Afterwards you can simply query from this view:
select *
from pub_hist
where "date" < now()
Unlike normal views, the content of a materialized view is not updated automatically. Instead you have to trigger a refresh, which will re-evaluate the defined query and store the result:
REFRESH MATERIALIZED VIEW pub_hist;
To refresh the views I simply run a daily (celery) job, which calls the refresh command for all views.
12.8.2023 20:52 Materialized Views in PostgreSQL
Servings: for 4 pizzas

Servings: for one pizza

Put all the ingredients together, mix well and spread directly onto the pizza. That easy!
If you like, you can also add some garlic powder. For a spicier sauce, just add a bit of chili from the grinder.
12.8.2023 19:26 Schnelle Pizzasauce
Servings: 2 servings

Servings: 2-3 servings

Origin: https://www.rewe.de/rezepte/veganes-beluga-linsen-curry/

Servings: 2 pizzas / 1 tray
Counting web page visitors accurately can be a challenging task, often bordering on the impossible.
There are two common methods to count visitors: extracting the number from server logs or using analytics software with an HTML snippet. Both approaches have limitations which will result in distorted numbers.
Using analytics snippets tends to result in an underestimation of visitor counts. A significant portion of internet users have an ad blocker, leading to their exclusion from the count. Additionally, certain users might disable JavaScript or refrain from loading resources from external domains, further contributing to the undercounting issue.
On the other hand, relying solely on server logs tends to overstate the number of visitors. This is because a significant portion of server logs comprises entries generated by bots and crawlers, which should ideally be filtered out to arrive at an accurate visitor count. Yet, many bots attempt to conceal their identity, making this filtering process challenging. Furthermore, some legitimate visitors might be counted multiple times, for example when they switch networks and acquire new IP addresses.
The true number of visitors will lie somewhere between the value produced by your analytics snippet and a number calculated from server logs. Unfortunately there is no way to retrieve the "true" number of visitors.
In conclusion: There is no true visitor number, it always depends on how you count and your definition of a visitor.
29.7.2023 20:00 Thoughts on Analytics

I built a website and crawler to analyze the podcast ecosystem. The website contains various reports about the usage of feed tags and audio properties.
For the information about tag usage I used defusedxml and wrote some basic validators to check that the included tags are also valid according to their specification.
The audio analysis is done using ffmpeg and ffprobe. I only extract basic features provided by these tools.
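The post doesn't show the extraction code; as an illustrative sketch only (the project itself may do this differently and in another language), ffprobe can emit the container metadata as JSON, which is then parsed into a struct:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// ffprobeResult mirrors the "format" object of ffprobe's JSON output.
type ffprobeResult struct {
	Format struct {
		FormatName string `json:"format_name"`
		Duration   string `json:"duration"`
		BitRate    string `json:"bit_rate"`
	} `json:"format"`
}

func main() {
	// Ask ffprobe for the container-level metadata of an episode file.
	out, err := exec.Command(
		"ffprobe", "-v", "error",
		"-print_format", "json",
		"-show_format",
		"episode.mp3",
	).Output()
	if err != nil {
		log.Fatal(err)
	}

	var info ffprobeResult
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatal(err)
	}
	fmt.Println(info.Format.FormatName, info.Format.Duration, info.Format.BitRate)
}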
I hope this information is helpful to the podcasting community and people building their own podcasting system.
27.7.2023 19:06 Podcast de facto Standard

Upgraded my blog to 2.0 of Owl Blogs. This is a 100% rewrite of my blog software. Some features, such as webmention, are not ported yet, but this new implementation allows me to be more flexible with features.
My next step will most likely be POSSE, as I often post photos to multiple sites manually.
19.7.2023 19:39 Now running on Owl Blogs 2.0