Eric McCarthy’s projects, writing, videos, and photos. If I make it for myself, I put it here.
One thing I suspect partisans like myself underappreciate is the extent to which members of Congress are driven by the concerns their constituents choose to share. Back in my days on Twitter, I followed someone who used to work as a congressional staffer and this was their common refrain.
I used to write my congressperson occasionally, but it has been well over a decade since I have. Part of the reason for this has been a creeping cynicism that this doesn’t do much good. But I think I’ve been getting this wrong. So yesterday I wrote the following to my Republican congressman, Juan Ciscomani:
Dear Representative Ciscomani,
I am writing to express my deep concern over the recent events unfolding in the Executive Branch, particularly those involving the Department of Government Efficiency (DOGE) and its leader, Elon Musk.
As a professional software engineer who was on Twitter during Musk’s takeover, I watched in disbelief as Musk publicly ridiculed and derided the engineering work of his new employees. While many of the layoffs were no doubt necessary for Twitter to remain profitable after the buyout, Musk could have chosen to treat those who had to be let go with the respect they deserved, given their contributions to making and keeping Twitter a company worth $44 billion. Instead Musk chose a chaotic, mean-spirited, and reckless path. By all accounts, Twitter (now X) is greatly devalued, with advertisers increasingly avoiding doing business with it.
This very public display of disrespect was a turning point in how many experienced software engineers viewed Musk. It seems he now surrounds himself with young and inexperienced acolytes — perhaps these are the only kind of people who do not challenge him. Sadly, it seems he has brought these people into DOGE and installed them in places where their inexperience and past associations with online gangs such as “The Com” could put government computer systems in danger of compromise.
Musk’s vendetta against the United States Agency for International Development (USAID) is another area where his capriciousness threatens the security of the United States. If aid programs are disrupted or stopped entirely, the standing of the U.S. will worsen and China will step in to fill the void.
I do not believe most of those who voted for Trump in Arizona’s 6th Congressional District had this sort of mayhem and disregard for security in mind when they cast their votes. My hope is that you will sponsor congressional action to rein in Musk and DOGE.
Sincerely,
Eric McCarthy
I’ll be resending this to my Democratic senators as well.
My approach to writing these letters is to attempt to persuade without being overly emotive, and to establish my credibility. I like to make it clear what my area of expertise is and how it is relevant to my concern. Keeping to the facts and tying those facts to my concern makes it more difficult for my concern to be dismissed.
I also don’t like to bring up my party affiliation or whether or not I voted for them. I’m their constituent regardless. It’s tempting to say that my vote for them is contingent on them taking action to address my concern, but I think leaving the door open to the possibility that I am a swing voter without ever explicitly mentioning it is more than enough.
One thing I did not do a good job of in this letter is being specific as to what action I’d like to see. When it comes to votes on legislation this is pretty straightforward. But in this instance I really don’t know what a good course of action for my Republican congressman would be. It may not be something specifically legislative in nature. Even just talking to a journalist and saying that he has reservations about Musk and DOGE would be welcome as it would open the doors for more Republicans to do the same.
If you want to write to your members of Congress, enter your address on the Find Your Members form of congress.gov. The “Contact” link should go directly to a web form to submit your note. My understanding is that this is generally the preferred way to express written concerns — snail mail requires a longer processing time. I’m not sure if by phone is any better or worse, but I’m a much better writer than I am a phone-talker so I just go with that.
9.2.2025 21:35
I Wrote To My Representative

In October Rebecca and I visited my parents on Long Island and took a week to drive around to various parks in upstate New York. It was a great trip! We took over 3 hours of video which I edited down to 25 minutes to create the above travelogue.
At some point in the future I’ll get around to making linkable chapters in videos, but here’s what you can expect if you watch the entire thing:
Most of this was shot on my iPhone 16 Pro in 4K or my GoPro Hero 12 in 4K, with some HD footage from my phone or Rebecca’s iPhone 14 Pro. I edited it in Final Cut Pro and got it ready for hosting here with the help of my media tools.
Not included in this video is our exploration of Ithaca and Rochester. We spent a good amount of time wandering around my alma mater, University of Rochester, and the Eastman Museum.
I still need to go through the photos and post more here, though you can see some from Letchworth that I used for The Rebound.
28.12.2024 06:05
New York 2024

This used to be a proper website.
It had a blog where I would write on politics. It had a page for a small bit of open source software I maintained. At one point it even had a photo gallery.
Over the years it moved from Movable Type, to WordPress, to some custom PHP. And then bit by bit it became a chore to maintain. “Platforms” like Twitter and Facebook grabbed my attention and made posting easy.
So limulus.net languished. I deleted it piece by piece. Until all that remained was a single little homepage with some hand-written HTML. I even 410ed my blog.
The Laurentide Ice Sheet once covered much of North America. Over tens of thousands of years it accumulated frozen water, spreading further south. It didn’t simply cover land as its glaciers rolled forward — it scraped, crushed, and ground the earth under enormous pressure. It and its sibling ice sheets of the period sequestered so much of the planet’s water that sea levels fell 120 meters.
The reality of the online world that we built didn’t really make itself fully apparent to me until Elon Musk’s purchase of Twitter. The company was flawed before his ownership but his gross public treatment of Twitter’s engineers — now his own employees! — prompted me to swear to myself that I would never work for a company of his nor would I ever purchase a product from one of his companies given a choice.
Prior to this — like many others — I had scaled back my usage of Facebook after the 2016 U.S. Presidential election. Seeing the under-informed and often outright racist opinions of distant family members and high school friends put me off using it. At some point Facebook started down-ranking political posts in their feed algorithm both to avoid a perception of political bias but also because it was clearly turning off users. Paradoxically this decreased Facebook’s relevancy to me — what good is it if my posts won’t be seen by people?
I stayed on Twitter for a number of months after the purchase, but when it became apparent that my values could not abide staying on Twitter I began my exit to decentralized social media. I’ll be honest, I was surprised such a thing existed or even could exist. My assumption had always been that social media at a large scale would require centralization. But I was wrong. It turned out this entire time folks had been working on a web standard to make decentralization work and open-source social media server software like Mastodon was proving it out.
Getting on Mastodon awakened in me a renewed enthusiasm for the web and having a place of my own. After a too-brief period on an instance named mastodon.lol, I migrated my account to my own Mastodon instance at mastodon.limulus.net. (Go follow me at eric@limulus.net!)
As Earth started to warm about 20,000 years ago the Laurentide Ice Sheet began its recession. But it was not a linear process. Some 14,000 years ago a glacier flowed south from current-day Rochester, New York. It covered an area almost as far south as the Pennsylvania state line. The glaciers were not yet done transforming the land.
As I write this it is a little over a week after Donald Trump’s second presidential election win. It seems I am not alone in looking at how things have turned out and being dismayed at the outsized influence billionaires have on us. Trump himself is a billionaire. Musk became one of Trump’s largest contributors and has been warping Twitter further and further right-wing and away from free-speech principles. Peter Thiel, a billionaire early investor in Meta, believes that freedom and democracy are incompatible, and is also an early backer of Vice President-elect J.D. Vance. Rupert Murdoch is a billionaire who owns a right-wing entertainment machine that has done more to create disturbing information silos than any other entity.
I doubt these people get together to conspire. Surely their egos can only stand each other’s company for a short time. But they all individually want the same thing: to further cement their status as oligarchs. The only thing that stands in their way is a well informed populace with an agreement on basic facts — so they seek to destroy even that.
Finally the glaciers receded. The land, no longer being crushed by heavy ice sheets, rebounded. New valleys were revealed. The Genesee River — which flows north towards Rochester and drains into Lake Ontario — found a new route. Over thousands of years this new route carved Letchworth Gorge, home to a number of impressive waterfalls. This is still new territory for our own dear Genesee.
What can I do in the face of this threat to my values? Well, there is only so much any one of us can do to solve the problems of the world. But one small thing I think I can do is rebuild this space and start regularly creating things to put on it. My hope here is to help find ways to make “platforms” less necessary, while also promoting ideas like subscribing to web feeds instead of following on LinkedIn, subscribing to a YouTube channel, or subscribing to a Substack. Part of this will — somewhat ironically — involve posting limulus.net links to some of these platforms.
This is not an approach that I came up with, nor is it even a new idea. It’s known as POSSE: “Publish (on your) Own Site, Syndicate Elsewhere.” Shamefully, I have only recently come to know about the IndieWeb movement that has been championing this approach.
A more technical article about how I am building this site with Eleventy and other tools will inevitably come later. But working on this article and working through some of the remaining technical challenges has helped alleviate some of my election-outcome despair. I am back to feeling like there is some hope. A lot of that hope is coming from the potential the web has to make a rebound after a capitalism-induced ice-age.
24.11.2024 18:35
The Rebound

I’ve previously mentioned my intention to eventually switch from targeting JavaScript to targeting WebAssembly. Well, I’ve done it! I went through and reimplemented everything I have done so far to target WebAssembly using SIMD instructions. Here’s the previous sphere’s shadow demo alongside the new WebAssembly version:
Click and drag (or touch and drag) to change the position of the light source, and thus change the shape of the sphere’s shadow. Change the resolution via the dropdown to observe the effect on render times.
I previously mentioned that my plan was to rewrite using AssemblyScript, which is a TypeScript based language that compiles directly to WebAssembly. This seemed really promising to me. My goal with this project was not to learn a new programming language but to learn about ray tracing and maybe do something fun with it.
Unfortunately, as I began to look more seriously into AssemblyScript I started to have my doubts about it. I briefly joined the AssemblyScript Discord server and it became apparent that established WebAssembly features like threads were not going to be implemented any time soon — seemingly because the authors have become disenchanted with one or more of the W3C working groups of which they were once a part. In a long and somewhat inscrutable manifesto they list their objections, offenses, and demands. I can’t discount that they were mistreated but I nevertheless found it an off-putting read. They may well have some valid points — after all it is easy to be sympathetic about ensuring WebAssembly interoperates well with the web — but I can’t shake the feeling that maybe the WebAssembly standard will be better without their participation for a time.
Once I took AssemblyScript off the top of the list of possibilities I looked for alternatives, but ultimately Rust was the obvious choice. It’s a language I have wanted to learn anyway.
I have previously spent a little bit of time playing with Rust, but this was certainly a more thorough experience. It was helpful to understand that since I was building something that would solely target WebAssembly I could rely on wasm-pack to take care of a lot of the build details.
Getting tests working was also made pretty simple thanks to wasm-pack and wasm-bindgen. The standard way of writing tests in Rust is to put the tests in the same file as the code under test, inside a tests module. Here’s an example from ray.rs:
#[cfg(test)]
mod tests {
    use super::*;
    use wasm_bindgen_test::*;

    #[wasm_bindgen_test]
    pub fn creating_and_querying_a_ray() {
        let origin = Tuple::point(1.0, 2.0, 3.0);
        let direction = Tuple::vector(4.0, 5.0, 6.0);
        let r = Ray::new(origin, direction);
        assert_eq!(r.origin, origin);
        assert_eq!(r.direction, direction);
    }
}
One downside to having all these tests in Rust is that there is no obvious way to get them running on this site. As a result, I’ve taken down the test page that allowed you to run the JavaScript tests in your browser. I might explore that at some point, but it’s not a priority for me right now.
If you’re not familiar with “Single Instruction, Multiple Data,” it is a category of instruction sets CPUs implement to speed up calculations where you need to perform the same series of operations over different sets of variables. This comes in handy for things like matrix math. If you are old enough, you may even remember when SIMD instruction sets began to be added to processors: MMX on Intel and AltiVec for PowerPC. These days the common SIMD instruction sets are AVX on Intel or AMD processors and Neon on ARM processors.
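To make the “same operation across lanes” idea concrete, here is a plain TypeScript sketch of my own (an illustration, not real SIMD — an actual SIMD add performs all four lane additions in a single instruction):

```typescript
// Conceptual sketch of a 4-lane SIMD addition. Hardware SIMD would perform
// all four lane-wise additions in one instruction; here they are spelled out.
type F32x4 = [number, number, number, number]

function f32x4Add(a: F32x4, b: F32x4): F32x4 {
  return [a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]]
}

// Adding two 4-component tuples is one lane-wise add instead of four scalar adds.
const sum = f32x4Add([1, 2, 3, 4], [10, 20, 30, 40])
```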
Of course, WebAssembly is a virtual machine. You can’t just throw AVX and Neon instructions in the WASM file. Instead, WebAssembly has defined SIMD instructions that get compiled into the SIMD instructions for whatever architecture the browser is running on. In Rust these are exposed as compiler intrinsics in the std::arch::wasm32 module.
WebAssembly’s SIMD instructions are all fixed-width, operating on 128-bit wide operands. So for example there are u8x16 instructions for addition, subtraction, comparisons, and so on that operate on 16 8-bit unsigned integers packed into 128-bit wide registers. Likewise, there are i8x16 instructions for signed 8-bit integers and f32x4 instructions for 32-bit floating point values.[1]
Here’s an excerpt from matrix.rs showing the multiplication operator implementation for Tuple and Matrix4.
use std::arch::wasm32::{f32x4_add, f32x4_mul, f32x4_splat};
use std::ops::Mul;

impl Mul<Tuple> for &Matrix4 {
    type Output = Tuple;

    fn mul(self, other: Tuple) -> Tuple {
        let mut sum = f32x4_splat(0.0);
        for i in 0..4 {
            sum = f32x4_add(
                sum,
                f32x4_mul(f32x4_splat(other.get(i)), self.col_v128(i)),
            );
        }
        Tuple::from_v128(sum)
    }
}
Now that I have implemented everything using explicit WebAssembly SIMD intrinsics, I am wondering if I should have looked more closely at the still-experimental portable SIMD module. The main benefit is that I could also target non-SIMD WebAssembly and compare the performance. I could probably also more easily make use of WebAssembly’s relaxed SIMD, which includes instructions like f32x4_relaxed_madd that would likely speed up matrix multiplication noticeably.

I’m curious how switching the above Tuple * Matrix4 implementation to it would perform. So I spent around two hours trying to get it to work, to no avail. I think the problem is that walrus, which is used by wasm-bindgen, does not yet seem to support the relaxed SIMD instructions. It throws this error when it hits the f32x4_relaxed_madd instruction:
Error: failed to deserialize wasm module

Caused by:
    0: failed to parse code section
    1: Unknown 0xfd subopcode: 0x105 (at offset 312376)
Maybe this should not be too much of a surprise, considering Relaxed SIMD doesn’t quite yet have wide support. It’s shipping in Chrome currently and is behind a feature flag in Firefox. It looks like Safari may be getting it soon based on this WebKit commit from 2023, although it’s not yet behind a feature flag in Safari Technology Preview.
I’m actually not sure! There are 8 or so pixels that are transparent, so if you are viewing the site in dark mode they will look black, and in light mode they will look white. I haven’t spent too much time trying to figure it out, but I must be doing something wrong when copying the image data from WASM world to JS world, or thereabouts.
This entry took a while to get completed thanks to various detours I took. The bulk of the Rust work was actually done pretty quickly. It maybe took a month or less. But I took a number of detours to work on this site, including adding support for serving videos and breaking apart the repository into three repositories. There’s now the penumbra repo for the core library, penumbra-www for this website, and touch-pad which is now available as an independent npm package.
Update 2024-11-20: The penumbra-www repo has been promoted to the main repo for all of limulus.net! It’s been renamed to limulus-dot-net.
With so much time spent on the above detours I’m looking forward to finally starting the next chapter of the book, which is “Light and Shading”. I’ll be screen recording as I work on it, so I might also produce some kind of video, likely focused on whatever demo I create.
In addition to u8x16, i8x16, f32x4, and f64x2 there are also instructions for u16x8, i16x8, u32x4, i32x4, u64x2, and i64x2. With all the ways you might want to slice 128-bit wide operands and all the different operations you want to do for each way of slicing, this makes for a lot of instructions! ↩︎
Above is a video I produced to demo GitHub Copilot to my coworkers. If you haven’t yet explored using a Large Language Model to help you code, it is worth a watch. I screen-recorded myself developing an optimization for Penumbra (this project) and edited it down to about 6 minutes.
I use Copilot in other ways not covered in the video. It’s clearly been trained on other Ray Tracer Challenge implementations so it very quickly autocompletes tests with all the exact values. This has saved me a bunch of mindless typing. It also autocompletes production code that satisfies the tests, which is often less helpful for this project since I usually want to spend some time thinking about how to implement these things. But sometimes I turn it back on to get suggestions that prompt me to consider a different and potentially better solution.
If you’ve read my previous posts you’ll notice that I’ve switched to Rust targeting WebAssembly. There’s a story behind that! But it will have to wait for a future post.
Making this video was a lot of fun! A little less fun was navigating how to host video on this site without introducing a dependency on a third-party. There’s a good reason why just about everyone uploads to YouTube — doing this reasonably well is not easy. This may wind up needing to be a journal entry or even video of its own.
23.3.2024 22:15
How I Am Using GitHub Copilot

In chapter 5 of The Ray Tracer Challenge you finally get to implement something that starts to resemble a ray tracer. You implement ray, sphere, and intersection related functions, and the exercise at the end ties it all together to create an image.
The book does not go into the details of the math for how to determine the intersection points of a ray and sphere. I’m glad for that, but it bugged me that I did not have an intuitive understanding of why this intersect method works:
class Sphere {
  intersect(ray: Ray): IntersectionCollection {
    // Transform the ray into object space
    ray = ray.transform(this.transformInverse)
    // Vector from the sphere origin (the world origin, in object space) to the ray origin
    const sphereToRayVec = ray.origin.sub(Tuple.point(0, 0, 0))
    // Supporting characters to determine the discriminant and intersections
    const a = ray.direction.dot(ray.direction)
    const b = 2 * ray.direction.dot(sphereToRayVec)
    const c = sphereToRayVec.dot(sphereToRayVec) - 1
    // The ray does not intersect the sphere if the discriminant is negative
    const discriminant = b ** 2 - 4 * a * c
    if (discriminant < 0) return new IntersectionCollection()
    // Calculate the intersection points
    const sqrtDiscriminant = Math.sqrt(discriminant)
    const t1 = (-b - sqrtDiscriminant) / (2 * a)
    const t2 = (-b + sqrtDiscriminant) / (2 * a)
    return new IntersectionCollection(
      new Intersection(t1, this),
      new Intersection(t2, this)
    )
  }
}
The book does suggest some online resources for an explanation of the math at work. I took some time to read through this one. It includes two solutions: a geometric solution and an analytic solution. The geometric solution made sense to me but the analytic solution less so. Still — despite an error[1] in that explanation — it did make some sense. One thing that helped was realizing that the discriminant being negative means there is no intersection because that would require taking the square root of a negative number.
The solution the book provides and I implemented above is the analytic solution. I would have a deeper understanding of it if it were the geometric solution, but at least I now have a better-than-tenuous idea of why this code works.
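To convince myself the analytic solution produces sensible numbers, here is a small standalone sketch (a hypothetical helper of my own, not my actual Sphere class) for a unit sphere at the origin. A ray starting at (0, 0, -5) pointing down the +z axis should hit at t = 4 and t = 6:

```typescript
// Standalone analytic ray/unit-sphere intersection for a sphere at the
// world origin (so the sphere-to-ray vector is just the ray origin).
function intersectUnitSphere(origin: number[], dir: number[]): number[] {
  const dot = (a: number[], b: number[]) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
  const a = dot(dir, dir)
  const b = 2 * dot(dir, origin)
  const c = dot(origin, origin) - 1
  const discriminant = b * b - 4 * a * c
  if (discriminant < 0) return [] // ray misses the sphere
  const s = Math.sqrt(discriminant)
  return [(-b - s) / (2 * a), (-b + s) / (2 * a)]
}

// Ray at (0,0,-5) pointing +z: hits at t = 4 (front) and t = 6 (back).
const hits = intersectUnitSphere([0, 0, -5], [0, 0, 1])
```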
<sphere-shadow>
The exercise at the end of the chapter is to render the shadow of a sphere by casting rays from a light source onto a “wall”. I’ve implemented that here, with the addition that you can change the position of the light source by dragging on the element.
I don’t normally test drive my demo code since it is more exploratory fun than writing code I intend to reuse. But I figured I would want to make use of the dragging interaction again, so I test-drove the creation of a TouchPad class to track all the mouse and touch events on (and off) of the element and emit only the needed move events. At some point I should add keyboard support to it as well.
Other than that there is not much new from the web technology side compared to the previous demo. Rendering of the canvas still happens in a single web worker.
In the demo I included an output to show the render time of the last frame. I get around 5.5ms in Chrome on my Mac Studio with an M2 Max. Firefox gets around 18.5ms and Safari around 9ms. I find this performance a little disappointing considering I feel like I have optimized things as much as I reasonably can. It makes me wonder if I should skip to targeting Web Assembly earlier than I was planning. I would like to keep the demos interactive in a real-time sort of way. Parallelization will help, but only so much on older devices with fewer CPU cores. Maybe now is the time…
At the time of writing this the issue with that page is that in the “Analytic Solution” section “equation 5” is a repeat of “equation 4”. It should actually be the quadratic formula:

t = (-b ± √(b² - 4ac)) / (2a)

Or, with the discriminant represented as D = b² - 4ac:

t = (-b ± √D) / (2a)
If you decide to dig this deep hopefully the above can save you the intense head scratching that I went through. ↩︎
In the previous post I went a little further than the exercise at the end of the chapter asked for and created a web component that exercised my tuple implementation and included animation of the projectile. As it turns out, this wound up being very similar to the exercise at the end of chapter 2, which is about implementing a canvas. So I decided to skip that exercise and continue onto the next two chapters which walk you through implementing various matrix math operations.
Implementing a canvas class when targeting a web runtime is perhaps unnecessary since <canvas> provides a solid 2D canvas JavaScript API. However, the book has you implement color functions for tuples containing floats, so a canvas that stores colors with floats instead of integer values seems like it might be a better bet going forward. So I decided to implement my own Canvas class backed by a Float32Array.
I went with 32-bit floats over the JavaScript-native 64-bit floats since I am still planning on porting this to AssemblyScript to take advantage of the v128 SIMD operations in WebAssembly. The “128” in “v128” implies a SIMD instruction can operate on either 4 32-bit floats or 2 64-bit floats. Four-times-faster is better than two-times-faster. And based on a bit of research the extra precision is usually not needed, or at least easy to avoid needing.
Using a TypedArray also opens the door to backing a canvas with a SharedArrayBuffer or something similar. I can imagine this being useful by having the ray tracer running in many Web Workers, all updating a shared canvas. There’s a bit of a snag with SharedArrayBuffer however…
In response to the Spectre vulnerability, browser vendors updated the SharedArrayBuffer constructor to throw so that it could not be abused until they had a fix. The fix they ultimately adopted requires sending two HTTP headers with your HTML document. Well, you can’t set HTTP headers on GitHub Pages, where I was previously hosting this site.
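For reference, the two headers in question are the cross-origin isolation headers that browsers require before they will re-enable SharedArrayBuffer:

```
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```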
I always planned on moving this site to my personal website, limulus.net. But my setup for limulus.net is very out-of-date. I have a GitHub repository for it but deployment is no longer automated. I just manually upload any changes to S3. None of the other infrastructure for it like the CloudFront distribution has been turned into CloudFormation templates so it’s all just sitting in AWS resources without any version control. I wanted to avoid adding to that mess for now by publishing to GitHub Pages.
Even though I know SharedArrayBuffer may not be how I ultimately choose to implement things, I also didn’t want to be in the situation where I am forced into switching away from GitHub Pages in the middle of the project instead of early on. In the (ok, unlikely) event that anyone was subscribed to the RSS feed, setting up redirects on GitHub Pages for that might be tricky. Better to just get it out of the way as soon as possible.
In a bid to get things done though, I resisted the urge to write CloudFormation templates for everything, so unfortunately I have added to my AWS technical debt. However I did spend the time to set things up in the new ways AWS recommends: I’m using GitHub’s OIDC provider to get temporary AWS credentials for the GitHub Action that publishes this site, and I avoided setting up the S3 bucket to use public website mode. I learned that to get CloudFront to serve index.html files for directories served from a private S3 origin you need to write a CloudFront Function to rewrite the request. So unfortunately that means I now have a tiny bit of code for hosting this site that is not version controlled. But at least now I know how to set these things up in a more secure way.
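A sketch of the kind of CloudFront Function I mean (the URIs and logic here are illustrative assumptions, not my exact deployed code): it rewrites directory-style request URIs to their index.html objects before the request reaches the private S3 origin.

```javascript
// CloudFront Function (viewer-request event): rewrite directory-style URIs
// so a private S3 origin can serve index.html documents.
function handler(event) {
  var request = event.request
  if (request.uri.endsWith('/')) {
    // "/journal/" -> "/journal/index.html"
    request.uri += 'index.html'
  } else if (!request.uri.includes('.')) {
    // "/journal" -> "/journal/index.html"
    request.uri += '/index.html'
  }
  // URIs with a file extension (e.g. "/main.css") pass through unchanged.
  return request
}
```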
There are only a few things particularly interesting about the implementation of my Matrix class. It likely should not have come as a surprise that the Tuple implementation would need to be treated as a matrix when doing matrix math operations. In fact, tuples need to be treated as matrices of four rows, which is not my intuition about how to conceptualize an array of four items. It seemed like I was going to need special handling in my Matrix class to account for whenever it was passed a Tuple, and that felt messy. The solution I landed on was to create a TwoDimensionalArray class that would act as the base class for both the Tuple and Matrix classes. This way, the Tuple class can construct itself with 1 column and 4 rows and the Matrix class doesn’t have to treat Tuples as special cases.
This kind of refactor is definitely where having a robust test suite (in this case provided by the book) shines. I had confidence in a relatively substantial refactor without any added effort.
While I won’t pretend to understand the reasons why (maybe I knew back when I took linear algebra?), if you want a transformation matrix to have multiple transformations you have to multiply them in reverse order. In other words, if you want a matrix that you can use to do a translation, then a rotation, then a scaling up, you need to first multiply the scaling matrix by the rotation matrix, and then the result by the translation matrix. The book suggests that you implement a “fluent” API of chainable methods that takes care of this for you. For example:
const twoOClock = Matrix.transformation()
  .translate(0, 1, 0)
  .rotateZ(-(2 / 12) * 2 * Math.PI)
  .scale(clockRadius, -clockRadius, 0)
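As a quick sanity check of that reverse-order rule, here is a tiny 2×2 sketch (hypothetical helpers of my own, not the book’s API). With column vectors, applying M to a point computes M * p, so the transform applied last must be the left factor of the product:

```typescript
// 2x2 matrices as row-major arrays; points as [x, y] column vectors.
type M2 = [[number, number], [number, number]]

const mul = (a: M2, b: M2): M2 => [
  [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
  [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]],
]

const apply = (m: M2, p: [number, number]): [number, number] => [
  m[0][0] * p[0] + m[0][1] * p[1],
  m[1][0] * p[0] + m[1][1] * p[1],
]

const rotate90: M2 = [[0, -1], [1, 0]] // 90 degrees counterclockwise
const scaleX2: M2 = [[2, 0], [0, 1]]   // double the x axis

// To rotate *first* and scale *second*, the product must be scale * rotate.
const rotateThenScale = mul(scaleX2, rotate90)
// The other order is a different transform entirely.
const scaleThenRotate = mul(rotate90, scaleX2)
```

Applying both to the point (1, 0) shows they disagree, which is exactly why the chainable API has to multiply the operations in reverse.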
What I’ve seen frequently in JS APIs is that they often require something like a final .done() method to perform the final calculations and produce the end result of the chain of operations. However, there is a way around this if you can structure your class to have the first attempt to read the values of the returned object do the finalization.
Here’s how that works with my Matrix class. The following is a selection of methods that demonstrate it. The .translate(), .rotateZ(), and .scale() methods in the example above all call .#pushOperation() to push their operation onto the #operationStack array and return this.
export class Matrix extends TwoDimensionalArray {
  static transformation() {
    const chainable = Matrix.identity(4)
    chainable.#operationStack = []
    return chainable
  }

  #operationStack?: Matrix[]

  protected override get values() {
    if (this.#operationStack) {
      const operationStack = this.#operationStack
      this.#operationStack = undefined
      const result = operationStack.reduceRight(
        (result, operation) => result.mul(operation),
        this
      )
      super.values = result.values
    }
    return super.values
  }

  #pushOperation(operation: Matrix): this {
    if (!this.#operationStack) {
      throw new Error('Attempted to push operation to non-chainable matrix')
    }
    this.#operationStack.push(operation)
    return this
  }
}
<pixel-clock>
The end-of-chapter exercise for chapter 4 is to use your matrix and canvas implementations to color in a pixel for every hour of a 12-hour analog clock. I did two things I really didn’t have to for this exercise: add animated “hands” and perform the rendering in a Web Worker.
Now that I have some hands-on experience with Web Workers I expect to be able to offload the work of the ray tracer off the main thread, and possibly even parallelize the work into multiple workers.
Now that these fundamentals are out of the way and I’ve got this site hosted where I want it, there should be less of a delay until the next post. With any luck the next post will also not be quite as long!
9.12.2023 01:30
Chapters 2–4: Canvas and Matrices

The first chapter of the book focuses on setting up a foundational “tuple” library for operations on vectors and points. I didn’t really expect there to be much to show for this other than working tests. But the chapter ended with a suggestion for creating a small program to experiment with a mini physics simulation that fires projectiles at various angles and velocities and has their trajectories affected by gravity and wind.
I decided to take this a step further and create a web component that would animate the projectile on a canvas. Here it is in action:
I attempted to represent the axis by increasing/decreasing the size of the projectile. Something seems off with that though.
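The simulation the book suggests boils down to one `tick` function: move the projectile by its velocity, then let gravity and wind bend the velocity. A minimal sketch of that loop, using plain `{x, y}` objects rather than the Tuple class from chapter 1 (all names here are illustrative):

```typescript
interface Vec2 { x: number; y: number }
const add = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x + b.x, y: a.y + b.y });

interface Projectile { position: Vec2; velocity: Vec2 }
interface Environment { gravity: Vec2; wind: Vec2 }

// Each tick advances the position by the velocity, then applies gravity
// and wind to the velocity for the next tick.
function tick(env: Environment, p: Projectile): Projectile {
  return {
    position: add(p.position, p.velocity),
    velocity: add(add(p.velocity, env.gravity), env.wind),
  };
}

// Fire, then tick until the projectile comes back to the ground (y <= 0).
let p: Projectile = { position: { x: 0, y: 1 }, velocity: { x: 1, y: 2 } };
const env: Environment = { gravity: { x: 0, y: -0.1 }, wind: { x: -0.01, y: 0 } };
let ticks = 0;
while (p.position.y > 0) {
  p = tick(env, p);
  ticks++;
}
console.log(`landed after ${ticks} ticks at x = ${p.position.x}`);
```

In the canvas version, each tick’s position becomes one plotted point of the trajectory.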
GitHub Copilot was helpful when writing the tuple tests. I could take the Gherkin test from the book, put it in a comment, and Copilot would generate the test code using the Gherkin-inspired test suite functions I wrote.
One thing that pained me as I wrote the implementation of the Tuple methods was knowing how inefficient they will be running in a JavaScript runtime. Vector math is the usual use case for SIMD instructions, but presumably JavaScript engines are not detecting that these operations could be compiled to SIMD instructions. I did a bit of forward research, though, and discovered that WebAssembly has SIMD support! At some point I plan to look at reimplementing Penumbra in AssemblyScript — but I first want a plain JavaScript baseline to compare against.
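To make the SIMD point concrete, here is a sketch of the kind of lane-wise tuple math involved, backed by a `Float64Array` (the function names are illustrative, not Penumbra’s API). A Wasm SIMD port could execute an `add` like this as a handful of vector instructions; in plain JavaScript the engine has to infer any vectorization from the scalar loop:

```typescript
// Element-wise addition of two 4-component tuples (x, y, z, w).
function add(a: Float64Array, b: Float64Array): Float64Array {
  const out = new Float64Array(4);
  for (let i = 0; i < 4; i++) out[i] = a[i] + b[i];
  return out;
}

// Dot product: multiply lane-wise, then sum the lanes.
function dot(a: Float64Array, b: Float64Array): number {
  let sum = 0;
  for (let i = 0; i < 4; i++) sum += a[i] * b[i];
  return sum;
}

const v = Float64Array.of(1, 2, 3, 0); // w = 0 marks a vector
const w = Float64Array.of(2, 3, 4, 0);
console.log(dot(v, w)); // 20
```

A baseline like this is also convenient to benchmark against a later AssemblyScript build, since both sides can share the same typed-array layout.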
Onward to chapter 2!
19.11.2023 02:51
Chapter 1: Tuples
The site is getting published. The CSS needs work. Now it’s time to try to get an in-browser test runner working. For this I have two goals:
Initially I was thinking that I would use the Gherkin tests directly from the book — by using some existing tool to parse them and provide hooks for wiring up the steps. But I’m not really finding anything like that out there. I did find this approach, which defines functions for each of the Gherkin prefixes and maps them to Mocha functions; that seems like a better fit.
Mocha also seems like a good choice for the test runner. And after a decent amount of trial and error I got it working how I want. I even got Eleventy’s dev server to reload the page when the code changes, which will be nice for development.
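The Gherkin-prefix idea can be sketched in a few lines: each keyword becomes a thin wrapper that prefixes the description and delegates to an underlying test function. In the real setup that function is Mocha’s `it`; here it is injected so the sketch is self-contained, and the `gherkin` factory is a hypothetical name:

```typescript
type TestFn = (description: string, body: () => void) => void;

// Build Given/When/Then/And wrappers around any it-shaped test function.
function gherkin(it: TestFn) {
  const step = (keyword: string): TestFn =>
    (description, body) => it(`${keyword} ${description}`, body);
  return {
    Given: step('Given'),
    When: step('When'),
    Then: step('Then'),
    And: step('And'),
  };
}

// With Mocha loaded this would simply be `gherkin(it)`. Here a tiny
// stand-in runner records the generated descriptions instead.
const ran: string[] = [];
const { Given, Then } = gherkin((desc, body) => { body(); ran.push(desc); });

Given('a ← tuple(4.3, -4.2, 3.1, 1.0)', () => {});
Then('a.x = 4.3', () => {});
console.log(ran); // ['Given a ← tuple(4.3, -4.2, 3.1, 1.0)', 'Then a.x = 4.3']
```

The payoff is that a Gherkin scenario from the book can be transcribed nearly line-for-line into the test file.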
Remaining yak shaving tasks:
But at this point I would rather get started on the first test. This stuff has taken up way too much time already.
11.11.2023 22:20
One More Yak to Shave
The way The Ray Tracer Challenge starts is refreshing. The “Getting Started” section doesn’t spend time on any development environment setup. Instead it is a simple introduction to the Gherkin syntax, then some notes on typical pitfalls. Then the first chapter gives a brief introduction to points and vectors and throws its first test at you. It doesn’t even remind you that you need to choose a language.
I thought about what I wanted to use for my implementation. I could have chosen this as a way to introduce myself to a new language, but I decided to initially go with something I was familiar with: TypeScript compiled to JavaScript running in the browser. This way I could focus on the ray tracer itself and demos would be easy to share. After some research I also concluded that there is runway for performance enhancements like WebAssembly, WebGL and WebGPU.
Unfortunately this raises lots of questions that all have to get addressed before I even get started!
For the name, after some poking around on Wikipedia, I settled on “Penumbra”. There are probably better names I could have chosen, but I couldn’t find any other ray tracers already using this one.
Deciding on where to publish the site was pretty straightforward. My personal site would make sense, but it’s all tied up with its own repository. I also don’t have any static site generator for it. My professional site does have a static site generator (Hugo) but I couldn’t convince myself it was an appropriate place for this project. So I decided to create a new repository for the project and use GitHub Pages to publish it.
My past experience with Hugo was alright, but Go template syntax kinda irks me. So I decided to give Jekyll a try, seeing as how it is the default for GitHub Pages. This was the first real nerd snipe.[1]
Jekyll is written in Ruby, and I figured I could just install the gem in my project. Unfortunately, Ruby’s bundle tool does not by default install gems in the project directory the way I expect a package manager to work in this decade. It’s possible to get it to work like this, but after frustration with current documentation not matching the version of Ruby that macOS ships with, I decided to try to do my development in a devcontainer. I got that working but wasn’t really happy with having to run Docker locally just for this project.
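For reference, Bundler can be told to vendor gems inside the project; this is the sort of configuration I had in mind (the `vendor/bundle` path is just the conventional choice, not something this project uses):

```shell
# Tell Bundler to install gems under the project directory instead of
# system-wide. This writes the setting to .bundle/config in the project.
bundle config set --local path 'vendor/bundle'
bundle install
```

With `vendor/bundle` and `.bundle` ignored in git, this behaves much more like a per-project package manager.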
After going back to the drawing board I started researching other static site generators. That’s when I came across Eleventy, a fast Node.js-based site generator. I had actually known about it for a number of months but had completely forgotten about it. So that is what I am setting up now, and it is what this site is using.
Now of course I am getting nerd sniped trying to figure out how to use Eleventy and general web design things:
I’ve managed to work through these now, but I’ve run into issues with Eleventy’s dev server not updating when it should and mysterious issues with the webc:keep attribute in bundling mode. This has me wondering if my plan for using Eleventy’s dev server for development will work out. But that will have to be the next entry…
Because I have an overly complicated project creation utility for personal TypeScript projects, I always wind up starting a project by refreshing the dependencies for that utility. This time that led me to discovering that node-git has stagnated and is no longer providing up-to-date pre-compiled binaries for the latest versions of Node. This results in 3–6 minutes of compile time when installing node-git. Yikes! That’s a lot for a CLI utility that is supposed to be run via npm create. So I decided to spend the time to switch it over to simple-git, which is pure JavaScript and doesn’t require any compilation. Thankfully I had written tests for the git functionality which did not mock out node-git, so swapping out the library was straightforward.