Hiker, weight lifter, archer, board gamer and cook. Sometimes a programmer, too. Probably walking my dog right now.
Extremely excited to check out the new Stacked Pull Requests feature of Tower 12! I have been manually managing a lot of stacked branches lately, and the ability to keep them updated automatically is going to save me so much time 🙌 😭 www.git-tower.com/features/…
4.9.2024 17:32

Not a bad grouping for my first round of the season! #Archery
Does your engineering team have a person or group of people that are responsible for the quality of the codebase itself – the build tools, tests, infrastructure, sharing patterns, etc? If so, what do you refer to those folks as?
25.6.2024 20:51

True love is when your wife gets the terrible version of “Stand By Your Man” from GoldenEye stuck in your head but you forgive them anyway www.youtube.com/watch
8.5.2024 14:39

The only good push notification #BirdBuddy
If Apple was smart, they’d bring Balatro to iOS through Apple Arcade. This game has a ton of potential on mobile, and Apple has a chance to grab a winner before it launches
22.3.2024 18:23

I recently needed to write a custom lint rule for a project that uses Vitest to run its tests. ESLint provides great tools for testing custom rules through the RuleTester class, but using it directly would mean that this project needed two different test runners to run all of its tests. This got me thinking: is there a way to run the tests for the lint rule using Vitest?
It turns out, there is! RuleTester is cleverly designed for exactly this purpose. There are three static methods that can be overridden on the RuleTester class to allow it to integrate with any test runner that you want. In a test helper, I defined a new class like this:
import { describe, it } from "vitest";
import { RuleTester } from "eslint";

export class VitestRuleTester extends RuleTester {
  static describe(message, callback) {
    describe(message, callback);
  }

  static it(message, callback) {
    it(message, callback);
  }

  static itOnly(message, callback) {
    it.only(message, callback);
  }
}
Now, after generating a lint rule using the typical Yeoman generator, you can replace the import of RuleTester from the eslint module with an import of this subclass that we’ve defined. Voilà! Your ESLint rule tests are now running with Vitest.
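To see the subclass in action, here is a minimal sketch of what a rule’s test file might look like; the no-foo rule, its import paths, and the noFoo message ID are hypothetical placeholders for your own rule:

import { VitestRuleTester } from "../helpers/vitest-rule-tester";
import rule from "../../rules/no-foo";

const ruleTester = new VitestRuleTester();

// `run` comes from RuleTester itself; it calls our static
// `describe`/`it` overrides, so these cases show up as Vitest tests
ruleTester.run("no-foo", rule, {
  valid: ["const bar = 1;"],
  invalid: [
    {
      code: "const foo = 1;",
      errors: [{ messageId: "noFoo" }],
    },
  ],
});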
For a deeper example of testing an ESLint rule with Vitest, you can check out my example project here.
29.1.2023 00:00 · Testing ESLint Rules with Vitest

I have been working on a full-stack application in SvelteKit recently. As the complexity of the application grew, it started getting harder to understand what was happening during each page render. I knew I needed something to help track down what was happening in my application during each request. The solution to my problem was a familiar one: request IDs!
A request ID allows you to have a single, unique value that can be used as part of each log entry that you create during the lifetime of a request. This allows you to find all of the log entries from a given request by searching for entries that contain the request ID. We will also attach the request ID as a header on the response from the server, so that we can access it from the browser when one of our requests fails; this can really aid in debugging problems that your users run into.
Setting up our application this way means there are three things we need to do:

- Generate a unique identifier for each request
- Make that identifier available to each request handler
- Attach the identifier as a header on the response
Let’s take a look at how we can do each of these things!
There can be a deceptive amount of complexity around creating a string that we can trust to be unique. Since this guarantee is very important to us, it’s a good idea to make use of a shared, trusted implementation for this behavior.
The uuid package on npm is often used for this purpose, but as of version 16.7.0, Node.js can actually do this for us! Since version 16.15.0 is now the LTS release (meaning it’s the recommended version for most users) we can safely choose to use the language’s tools rather than an external package.
Creating a unique identifier in Node.js looks like this; we can import the randomUUID function from the node:-scoped crypto module and call it to create a guaranteed-unique identifier for us to use.
import { randomUUID } from "node:crypto";
const id: string = randomUUID();
This is a code snippet that we will come back to in the next section!
Now that we know how we are going to generate our identifier, we need a way to make it available to our request handler. While we could generate it within each of our SvelteKit endpoints, this ends up being a lot of repeated boilerplate code that it would be nice to avoid. Thankfully, SvelteKit has a mechanism called “locals” that serves exactly this purpose! “Locals” allow us to define additional properties that are attached to the event object that each SvelteKit endpoint receives.
The first step, if you’re using TypeScript, is to tell SvelteKit the type of your new “local”. Skip to the definition for handle below if you’re using JavaScript; otherwise, open up the src/app.d.ts file, which should look something like this:
/// <reference types="@sveltejs/kit" />

// See https://kit.svelte.dev/docs/types#app
// for information about these interfaces
declare namespace App {
  // interface Locals {}
  // interface Platform {}
  // interface Session {}
  // interface Stuff {}
}
This file contains a few different types that, if defined, will help power autocomplete and type-checking for different SvelteKit APIs that you can define for your application. In this case, we want to define the Locals interface to include our request ID by updating the file like so:
/// <reference types="@sveltejs/kit" />

// See https://kit.svelte.dev/docs/types#app
// for information about these interfaces
declare namespace App {
  interface Locals {
    /**
     * The unique identifier for this request
     */
    requestId: string;
  }

  // interface Platform {}
  // interface Session {}
  // interface Stuff {}
}
Now that we have the type definition in place, we’re ready to actually define our request ID “local”! We can do this by using the handle hook, which allows us to define logic that runs before or after SvelteKit creates the response to a request. Right now we will use it to define our “local” on the event object before SvelteKit creates the response for the request:
// src/hooks.ts
import type { Handle } from "@sveltejs/kit";
import { randomUUID } from "node:crypto";

export const handle: Handle = async ({ event, resolve }) => {
  event.locals.requestId = randomUUID();

  const response = await resolve(event);

  return response;
};
With that in place, every SvelteKit endpoint that you define can access locals.requestId to retrieve our unique identifier!
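For example, here is a minimal sketch of an endpoint that tags a log entry with the request ID. It uses the pre-1.0 SvelteKit endpoint style to match the src/hooks.ts file above, and the route and log message are hypothetical:

// src/routes/items.ts
import type { RequestHandler } from "@sveltejs/kit";

export const get: RequestHandler = async ({ locals }) => {
  // Include the request ID in each log entry we write
  console.log(`[${locals.requestId}] Loading items`);

  return {
    body: { items: [] },
  };
};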
The last requirement that we defined earlier was to supply the request ID as a header on the response. Thankfully, our handle hook can help us here too! Since it receives the response from SvelteKit before it is delivered to the browser, we have an opportunity to modify it before it is sent.
// src/hooks.ts
import type { Handle } from "@sveltejs/kit";
import { randomUUID } from "node:crypto";

export const handle: Handle = async ({ event, resolve }) => {
  event.locals.requestId = randomUUID();

  const response = await resolve(event);
  response.headers.set("x-request-id", event.locals.requestId);

  return response;
};
Here we have followed the convention of calling the header x-request-id, but you can choose any name that makes sense to you!
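As a quick sketch of the browser side of this, you might surface the header when a request fails; the endpoint path here is hypothetical:

const response = await fetch("/api/items");

if (!response.ok) {
  // Grab the ID so it can be included in an error report, making it
  // easy to find the matching server-side log entries later
  const requestId = response.headers.get("x-request-id");
  console.error(`Request failed (request ID: ${requestId})`);
}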
I hope this post has been useful for learning about request IDs and how you can create them in SvelteKit. If you want to view the source code for a working example app that uses this pattern, you can check that out here!
30.5.2022 00:00 · Setting a Request ID in SvelteKit

I recently worked on a Node.js project in TypeScript that made use of my usual suite of tools:

- ava with ts-node for testing
- eslint for linting
- tsc to compile my TypeScript files into JavaScript

This all worked great when there was little-to-no tsconfig.json customization present, but I ran into a situation that caused me some trouble.
A third-party package with an npm scope (meaning the name looks something like @organization/package-name) did not come with type definitions, nor were they available from Definitely Typed. I could write a local type definition by extending typeRoots in the TypeScript configuration file, but this didn’t work well for ts-node which, by default, ignores that property. I tried to configure paths instead but could not get that working correctly with the scoped package name. After a lot of back-and-forth over the configuration possibilities, I almost gave up and just avoided trying to add types for this package altogether!
While reflecting on how nice and easy the “just install a @types/ package” approach to third-party type definitions is, it occurred to me that I could probably write my own @types/ package for it within the repo and have my package manager actually install it into node_modules. This would satisfy all of the tools and avoid needing any custom tsconfig.json magic; for all intents and purposes, it would be a “normal” @types/ package that just so happened to come from inside the repo instead!
I was able to achieve this by first creating a package within the repo for the type definitions. Note that this does not need to be a workspace package; it’ll work just fine without that.
mkdir -p types/organization__package-name
echo '{ "name": "@types/organization__package-name" }' > types/organization__package-name/package.json
touch types/organization__package-name/index.d.ts
Then, in the package.json for your project, add the following:
{
  "devDependencies": {
    "@types/organization__package-name": "file:./types/organization__package-name"
  }
}
Note that the naming here is important: for scoped npm packages, the expectation for the corresponding @types package (because it, itself, is within the @types scope) is to remove the @ from the name of the scope and join the scope and package name with two underscores in a row. The path on your file system can really be anything; it’s the key in your devDependencies that is actually important for TypeScript to locate the files automatically.
After installing your new dependencies with whichever package manager you prefer, you’re all set to fill out your index.d.ts file with the types for your dependency!
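As a starting point, the declarations can live at the top level of the file, since TypeScript maps @types/organization__package-name back to @organization/package-name automatically. The exports below are hypothetical stand-ins for whatever the package actually provides:

// types/organization__package-name/index.d.ts
// Hypothetical API shape; replace with the package's real exports
export interface ConnectOptions {
  retries?: number;
}

export function connect(url: string, options?: ConnectOptions): Promise<void>;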
I am a big fan of the GitHub Command Line tool, gh. In particular, it’s a great way to list the pull requests for a repository and then check one out locally for review.

By default, this workflow is a little tricky. When you list your PRs, the list is passed automatically through your $PAGER program (probably less), and regardless of how much content there is, you have to actively dismiss less to go back to the command line.
Once you’ve pressed some key, you end up back at your command line… without the pull requests visible anymore!
The problem lies with the workflow between listing the pull requests and checking one out. Having followed the previous steps, you’ve seen your list and dismissed it. Now, do you remember the number for the PR you want to check out? If you’re anything like me, you have probably already forgotten it! It would be great if we could keep the list visible in our shell history rather than having it disappear.
Thankfully, the gh tool allows you to override your $PAGER environment variable and use something else instead. If you configure less with a few particular flags, you can avoid needing to interact with the keyboard to dismiss the list of pull requests and leave them visible in your command line history.
To configure an alternate pager, you can run this:
gh config set pager "less -FX"
With that in place, the list of pull requests no longer needs any kind of interaction to dismiss it. The -F flag tells less to exit immediately when the content fits on one screen, and -X keeps it from clearing the screen when it exits.
Now it’s much easier to reference the list of pull requests when checking one out!
10.2.2021 00:00 · Print GitHub CLI Pull Requests Without Paging

A recent project at work had me defining some shared button styles for us to use in conjunction with Tailwind CSS. The styling is much like you might expect: a base button class with some specific “types” of buttons in different styles. For the purpose of illustration, imagine something like this:
.button {
  color: black;
}

.button.type-plain {
  color: blue;
}
To render a “plain” button, you use the classes together on an element:
<button class="button type-plain">Click me!</button>
While our design system dictated that all “plain” buttons use blue text, the reality is that sometimes the buttons need another color. Since we use Tailwind CSS, it would be great if we could use one of Tailwind’s text-color utilities to override the default and provide a custom color.
<button class="button type-plain text-red">Click me!</button>
However, this led to a problem of specificity: the text-red selector has a specificity of 1 and the compound selector .button.type-plain has a specificity of 2, so our button – which should be red – was actually blue!
The problem lies in the fact that we set color directly in a compound selector, which will have a higher specificity than any of our utilities. What if we could avoid setting color in the .button.type-plain selector? If only .button defines the color property, then our utilities will be able to override it again¹!
The fix I found is to use a CSS variable to define the color to apply, and only actually set the color property from the .button selector.
.button {
  --button-text-color: black;
  color: var(--button-text-color);
}

.button.type-plain {
  --button-text-color: blue;
}
Now, .type-plain will set the color when .button is the class controlling it. If a utility like text-red is present, though, the color will still be overridden to our desired value!
¹ This works as long as .text-red is defined after .button in your stylesheet. When two selectors on an element have the same specificity, the latter definition is applied. ↩︎
I recently ran into a bit of an odd situation regarding a problematic npm dependency. Our app depended on an old version of d3, which had a dependency on an old version of jsdom, which itself depended on contextify. contextify is not supported on modern versions of Node and would fail to install. Upgrading d3 to a modern version without the dependency on jsdom was too hard, but we needed some way to move forward.
As it turns out, jsdom was only a dependency of d3 in order to support a Node environment, which was not necessary for our app’s use case. Could we replace the jsdom dependency entirely with some kind of “dummy” package, since we didn’t need a real, working version of jsdom anyway?
I took to Twitter with the question, and Jan Buschtöns replied with a great suggestion:
As our application is already using yarn workspaces, this worked great! We created a package in the monorepo called noop with nothing but a package.json like this:
{
  "name": "noop",
  "version": "1.0.0"
}
and then used yarn resolutions to point jsdom to that package. Our “root” package.json got the following:
{
  "resolutions": {
    "**/d3/jsdom": "file:./packages/noop"
  }
}
which tells yarn to replace the d3 dependency on jsdom with our dummy package.
If you end up in a case like this yourself and don’t have a place to create your own dummy package, you could use something like the none package instead for the same effect!
Recently, Movable Ink, the company where I work, released our configuration for Tailwind as an open-source project. While it’s only being used internally, making it Open Source has been a motivating factor to keep the code clean and be thoughtful about how we’re maintaining it. Using GitHub Actions has been key in helping us achieve that goal. In this series of posts, I’ll be covering all the ways we’re putting GitHub Actions to work for us.
In this first post we’ll dive into the configuration for our Verify workflow, which runs our tests and makes sure that all of the code is formatted the right way.
The testing and linting jobs are almost identical, so we’ll only go in-depth into the test job. Let’s break down the steps to see what’s going on. Below is the “full” definition for the test job in our verify workflow:
test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v1
    - uses: actions/setup-node@v1
      with:
        node-version: "12.x"
    - name: Get yarn cache directory path
      id: yarn-cache-dir-path
      run: echo "::set-output name=dir::$(yarn cache dir)"
    - uses: actions/cache@v1
      id: yarn-cache
      with:
        path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
        key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
        restore-keys: |
          ${{ runner.os }}-yarn-
    - run: yarn install
    - run: yarn test
The first few lines are pretty typical for all GitHub Actions:
test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v1
These state that:

- The job is named test, as that’s the top-level key that the rest of the information is nested under
- The job runs on the latest Ubuntu image that GitHub Actions provides
- The first step is to check out the repository’s code

That last one might seem a little obvious, but as we’ll see in later posts within this series, you’ll sometimes want some slightly different behavior!
The next step gives us a Node environment with yarn installed automatically, which is great for our project that uses yarn.
- uses: actions/setup-node@v1
  with:
    node-version: "12.x"
The with key is how we can provide input into an action. It can be thought of like providing arguments to a function call. For the actions/setup-node action, we can provide a specific Node version we want to run against. While the action will work without a specific version, I prefer to provide that value to remove some guesswork about the environment we are running inside.
The next few steps came directly from the documentation for actions/cache, the Action provided by GitHub for caching files between jobs. In the example below it is used to prime the environment with the yarn cache from our last test run, so that we can avoid the time to download dependencies where possible. This step is entirely optional, but in my experience has shaved at least 30 seconds off the time to run this job, which in my opinion is worth the few extra lines of configuration!
Since they are a little hard to read, let’s break down exactly what’s happening here:
- name: Get yarn cache directory path
  id: yarn-cache-dir-path
  run: echo "::set-output name=dir::$(yarn cache dir)"
- uses: actions/cache@v1
  id: yarn-cache
  with:
    path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
    key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-
The first step sets up a variable that we’ll use in the second step through the output of the step. Actions can have an output that can be referenced later on in your configuration file. Note the id on that step; it’ll be important later on!
Let’s dive into the syntax of the command being run here:
echo "::set-output name=dir::$(yarn cache dir)"
We start off by using echo to print something to STDOUT. GitHub Actions looks for this specific ::set-output syntax to find the output from your actions. This whole mechanism is pretty clever, in my opinion, because it means that anything can set output from an action to pass along for later use; all it needs to do is print that line to the console.
name=dir specifies how we’ll reference the output. An Action can have as many different outputs as it would like, so they must be named. In this case, we’re naming it dir. The :: is part of the Actions syntax, and is used as a separator between the name of the output and the value.
The next bit here is a bit of bash-foo: $(yarn cache dir) says to run the yarn cache dir command and interpolate the result into the string that it’s found within. The result here is an Action output called dir whose value is the result of yarn cache dir, the location that yarn is configured to cache anything it has downloaded.
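To make the mechanism concrete, here is a hypothetical pair of steps, using the same set-output syntax from this era of GitHub Actions, where one step produces a value and a later step consumes it:

- name: Produce a value
  id: produce
  run: echo "::set-output name=result::hello"
- name: Consume the value
  run: echo "The output was ${{ steps.produce.outputs.result }}"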
All of that gets us through just the first of the two yarn-caching steps, but the latter is somewhat easier to digest. Here we’re using actions/cache to restore the yarn cache between test runs.
- uses: actions/cache@v1
  id: yarn-cache
  with:
    path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
    key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-yarn-
The with key here is how we pass input to a GitHub Action. The actions/cache action takes three inputs that we care about for our use case:

- path: The location on disk that we want to cache. Here we’re using the fact that we can reference the output from previous steps in the configuration of future ones. The ${{ }} syntax is how we tell GitHub Actions that we want to grab a dynamic value that the Actions environment provides. The steps.yarn-cache-dir-path.outputs bit is how we reference a specific previous step (note the id that we set up previously and the way it appears in the reference for the output). We lastly provide dir, the specific name of the output from our previous step.
- key: The key to match on when we’re restoring our dependency cache. Here we’re dynamically building the key from a few values: for one, the operating system that we’re running in, since the dependencies might install differently on different OSes; secondly, a hash of the yarn.lock file, since a yarn.lock describes the specific set of dependencies that we’ll need. By using a hash of the yarn.lock in the cache key, we can make use of a cache created by a previous job as long as it has not installed or removed any dependencies, which is the behavior that we want!
- restore-keys: actions/cache allows us to provide “partial” keys to be used if we don’t have an exact cache key “hit”. Since the dependencies are likely similar, even if the yarn.lock hash has changed, we are telling GitHub Actions to restore from another cache that matches the prefix ${{ runner.os }}-yarn- in case of a cache “miss”. That will serve as a decent starting point for our dependency installation, rather than starting from a completely empty cache. When GitHub Actions uploads a new cache later on, though, it will store it with the full key that was provided by the key input.

The actions/cache documentation does a great job of giving a deeper description if you want more information.
There is one last step after the cache configuration that’s important:
- run: yarn install
We need to make sure we actually run yarn install! This takes the files out of the cache and places them into the correct location in your filesystem, as well as downloading any additional dependencies that were added since the cache was created.
The last step is actually what we want to run in the first place!
- run: yarn test
With our environment ready for us, we can run our test suite. For this library in particular, that means running ava, a simple-to-use test runner for Node projects. The actual tools matter little, as long as you’re writing tests somehow!
While our previous example had us writing a lot of the logic by hand, there are also pre-built actions that bundle up some helpful behavior for us. One great example of this is Percy, which provides a GitHub Action for creating a visual diff test for each of your Storybook stories. You can find that action here.
The definition of the job is identical to our test job, except that instead of running yarn test, we use the Percy-provided action like so:
- name: Percy Test
  uses: percy/storybook-action@v0.1.1
  with:
    storybook-flags: "-s dist"
  env:
    PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}
This covers the testing configuration for our project. Keep an eye out for future posts on changelog generation, file size reports and deployment!
15.2.2020 08:07 · Verifying Changes

While writing some acceptance tests recently I kept running into slight race conditions between the state of my application and an assertion I wanted to make. For example, one of the tests looked something like this:
test("creating a comment", async function (assert) {
assert.equal(Task.comments.messages.length, 5, "Starts with correct number of comments");
await Task.comments.input.fillIn("A new comment");
await Task.comments.send();
assert.equal(Task.comments.message.length, 6, "Adds a new comment");
});
How does the test know that the right number of messages should be visible at the point that send() resolves?
Thanks to the smart folks that create the test utilities we have available in Ember, the answer is ✨magic✨ (sort of). The work to render the new message is scheduled into the Run Loop, and send() resolves once the Run Loop is done with any pending work. You often don’t even need to think about the fact that there is probably some time between when the message is created and when it appears on the screen.
This, however, wasn’t always working for me. Specifically, it worked locally but often broke when running the tests in CI – the page would not have the new message visible at the point that we tried to check the updated count. How can we make the test more resilient to this kind of failure?
Ember ships with a useful helper function called waitUntil. You can give it a function, and it will create a Promise that resolves once your function returns true. We can use it to make sure that the new message is visible before our assertion is run to make the test a little more reliable.
import { waitUntil } from "@ember/test-helpers";

test("creating a comment", async function (assert) {
  assert.equal(Task.comments.messages.length, 5, "Starts with correct number of comments");

  await Task.comments.input.fillIn("A new comment");
  await Task.comments.send();

  await waitUntil(() => Task.comments.messages.length === 6);

  assert.equal(Task.comments.messages.length, 6, "Adds a new comment");
});
If we never get to a point where 6 comments are visible, an error will be thrown by waitUntil and our tests will fail.
Waiting on the condition and then asserting the same condition introduces some repetition that would be nice to avoid, however. How can we clean this up?
Based around the testing approach that The Frontside has talked about on their podcast (and use within their BigTest testing tools), I packaged the assertion and waiter into a single, custom QUnit assertion. It allows you to “converge” on a condition in your tests — it will continue to try your assertion until it is met and fail if the case is never met.
The above test can be revised using it like so:
test("creating a comment", async function (assert) {
assert.equal(Task.comments.messages.length, 5, "Starts with correct number of comments");
await Task.comments.input.fillIn("A new comment");
await Task.comments.send();
await assert.convergeOn(() => Task.comments.length === 6, "Adds a new comment");
});
The same effect is achieved, but without the duplication between the waiter and assertion.
If you want to leverage this pattern in your own tests, you can put the following in your tests/test-helper.js file:
import QUnit from "qunit";
import { waitUntil } from "@ember/test-helpers";

QUnit.extend(QUnit.assert, {
  async convergeOn(condition, message) {
    try {
      await waitUntil(condition);
      this.pushResult({ result: true, message });
    } catch (e) {
      if (e.message === "waitUntil timed out") {
        this.pushResult({ result: false, message });
      } else {
        throw e;
      }
    }
  },
});
Hopefully this pattern helps you write clear, stable tests!
1.3.2019 08:07 · Converging on a Condition in QUnit

Lately I’ve been thinking a lot about “pull” and “push” with regard to the way functions interact with each other. Imagine two functions, a and b, where a depends on receiving a value from b. The value is pulled if a determines when the value is delivered; it is pushed if b determines the timing.
Combined with the ability to produce either one or more than one value, you get a total of four possible categories:

- A Function allows you to pull a single value from it
- A Generator allows you to pull any number of values from it
- A Promise pushes you a single value when it is ready
- ??? pushes you any number of values when it is ready

What fills in the ??? in the statement above? The answer is an Observable. Let’s walk through how to use them by comparing their behavior to promises. If you aren’t comfortable with your knowledge of promises, take a moment to read through the MDN document on using them before reading more of this post.
What is an Observable?

An Observable can be used to represent a stream of values over time. Much like a promise, you don’t know when you will get a value. They can be used any time you want to represent a series of values from a given source, such as messages arriving over a WebSocket or DOM events like clicks and keypresses.

Let’s dig into some details on what an Observable is and how to use them.

How do you use an Observable?

Much like you call then on a promise to receive a value from it, you can call subscribe on an observable to begin receiving values:
const subscription = observable.subscribe((value) => {
  console.log(value);
});
The act of subscribing to the observable creates a subscription. The callback passed to subscribe is called an observer, and can also take the form of an object. The following example behaves the exact same way as the one above:
const subscription = observable.subscribe({
  next: (value) => {
    console.log(value);
  },
});
Unlike a promise, where your handler is called at most one time, the next callback is invoked for each value that the observable produces.
Since we do not know how many values we will receive or when we will receive them, we may run into a case where we need to signal that we are no longer interested. The subscription allows us to unsubscribe when we no longer want to receive values:
const subscription = observable.subscribe((value) => {
  console.log(value);
});

// Some time later...
subscription.unsubscribe();
Once you’ve called unsubscribe
, your handler function will no longer be run.
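To make this concrete, here is a minimal sketch of an observable that pushes a value every second. It assumes an implementation such as RxJS, since observables are not built into the language:

import { Observable } from "rxjs";

const ticks = new Observable((subscriber) => {
  let count = 0;
  const timer = setInterval(() => subscriber.next(count++), 1000);

  // This teardown function runs when the subscription is cancelled
  return () => clearInterval(timer);
});

const subscription = ticks.subscribe((value) => {
  console.log(value); // 0, 1, 2, ...
});

// Some time later, stop receiving values
subscription.unsubscribe();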
When dealing with a promise, you can react to an error occurring as well as a value being produced. Similarly, you can also react to errors from an observable.
const subscription = observable.subscribe({
  next: (value) => {
    console.log(value);
  },
  error: (error) => {
    console.error(error);
  },
});
While an observable can represent an infinite source of values, it is possible that no more will be produced. In that case, they can signal that they are “complete”.
const subscription = observable.subscribe({
  next: (value) => {
    console.log(value);
  },
  complete: () => {
    console.log("Done producing values!");
  },
});
There is more to know about observables, but this is enough to get started. Below are some resources for learning more:

- RxJS, which provides a popular Observable implementation and a ton of utilities

Coming soon from me: using observables in Ember.js!
27.1.2019 23:52 · Observables: A Brief Introduction

Today at EmberConf, Matthew Beale spoke about the new Module Unification directory layout that will be coming to Ember in the near future. If you want to try it out now, you can install the canary version of the Ember CLI and generate a new application.
Thanks to npx, you can do this with a single command:
MODULE_UNIFICATION=true npx ember-cli/ember-cli new __name_of_app__
This avoids needing to globally install the canary version of the Ember CLI but still gives you access to the bleeding-edge features.
Protip: If you want to use yarn, throw --yarn on the end of that command.
If you have an Ember component that requires an Ember Data model as an attribute, you might want to use Mirage to generate the models in the right shape. Thankfully, you can access Ember Data in your test to generate the data, then pass that into the component to test it.
import { module, test } from 'qunit';
import { setupRenderingTest } from 'ember-qunit';
import hbs from 'htmlbars-inline-precompile';
import setupMirage from 'my-app/tests/helpers/setup-mirage';
import { find, render } from '@ember/test-helpers';
import { run } from '@ember/runloop';

module('Integration | Components | render-post', function(hooks) {
  setupRenderingTest(hooks);
  setupMirage(hooks);

  hooks.beforeEach(function() {
    this.store = this.owner.lookup('service:store');
  });

  test('it renders a blog post', async function(assert) {
    const post = this.server.create('post', {
      name: 'Generate Integration test data with Mirage'
    });

    await run(async () => {
      this.set('post', await this.store.findRecord('post', post.id));
    });

    await render(hbs`{{render-post post}}`);

    const title = await find('h1');
    assert.equal(title.textContent, post.name, 'Rendered the title');
  });
});
Note: For the setupMirage definition, see my previous blog post about the new QUnit API.
I recently upgraded a large Ember app to the new API and ran into a few problems along the way. Here’s a few tips for making your transition smoother than mine was.
To start off, update to the latest ember-cli-qunit:
yarn ember install ember-cli-qunit
Additionally, ember-test-helpers can be removed from your dependencies if you have it listed there, since ember-cli-qunit will bring in ember-qunit, which in turn will bring in the new version of that package, @ember/test-helpers.
yarn remove ember-test-helpers
Thankfully, there’s an excellent codemod that can look at your tests and convert them to the new syntax. It’s not the only thing that you’ll need to do, but it does get you pretty far.
You can find the repository here, but for a quick one-liner, you can run it like this:
npx jscodeshift -t https://rawgit.com/rwjblue/ember-qunit-codemod/master/ember-qunit-codemod.js ./tests/
The tests/helpers/start-app.js and tests/helpers/destroy-app.js helpers are no longer used with the new testing API, and the means for creating your application instance has changed as well. If you did any setup of your test environment in start-app.js, you should move that code to tests/test-helper.js. Both of those files can be deleted.
Additionally, you need to call the new setApplication function provided by @ember/test-helpers in your tests/test-helper.js file. Check out the ember-new-output repo for an example of what the file should look like after the change.
Finally, you’ll need to ensure that your application doesn’t start immediately but instead boots when your tests say so. You can configure this in your config/environment.js
file like so:
"use strict";
module.exports = function (environment) {
// ...
if (environment === "test") {
// Ensure app doesn't automatically start
ENV.APP.autoboot = false;
}
return ENV;
};
ember-cli-page-objects

If you use ember-cli-page-objects, the latest beta release allows it to work with the new @ember/test-helpers changes. This is necessary because the test helpers that used to be injected into the global scope are now imported explicitly. Upgrade to at least version 1.15.0.beta.1 and everything should “just work” (although you may start getting deprecation warnings about a change to the collections API, as I did). I took this opportunity to fix those issues while I was updating everything else.
Making ember-cli-mirage explicit

Tests in the new style won’t automatically start the Mirage server and set up the global server reference (which is probably a good thing!). After updating to Mirage 0.4.2 or later, you explicitly import a helper and pass in the hooks, much like the way you set up an Acceptance or Integration test:
import { module, test } from "qunit";
import { setupApplicationTest } from "ember-qunit";
import setupMirage from "ember-cli-mirage/test-support/setup-mirage";
import { currentRouteName, visit } from "@ember/test-helpers";

module("Acceptance | Projects | Show", function (hooks) {
  setupApplicationTest(hooks);
  setupMirage(hooks);

  test("visiting a project", async function (assert) {
    const project = this.server.create("project");

    await visit(`/project/${project.id}`);

    assert.equal(currentRouteName(), "project");
  });
});
An added benefit is that setupMirage works in any kind of test, not just Acceptance tests, making Mirage usage more consistent. For more information, check out the 0.4.2 release notes.
Here’s a few other things that, while not necessary, are good improvement to make to spruce up your tests
jQuery in tests

The new @ember/test-helpers provides a great set of jQuery-less test helpers for interacting with the DOM. As Ember moves toward removing jQuery as a dependency, you might want to migrate to these new helpers. Thankfully, there is a codemod that you can find here that transforms test code like this:
this.$(".foo").click();
Into code like this (which doesn’t require jQuery):
import { click } from "@ember/test-helpers";
await click(".foo");
I hope this was a useful guide. If you have any tips of your own or want suggestions on improvements, get in touch!
21.2.2018 10:10 · Upgrading an Ember app to the new QUnit API

Chances are you use some dependencies that have their source code hosted on GitHub. It’s useful to be able to check the differences between two commits to see what has changed, especially when determining what breaking changes there might be between two releases. git of course has this functionality, but accessing it through the GitHub UI is much more convenient. I couldn’t find a nice way to access this feature, though, so I started to do a little digging.
It turns out that it’s actually really easy to create the URL for viewing the differences yourself. For any given project, you can go to the URL that looks like:
https://github.com/__NAMESPACE__/__PROJECT__/compare/__EARLIER_COMMIT__...__LATER_COMMIT__
to see all of the changes between those two commits. So for example, you could go here:
https://github.com/alexlafroscia/til-blog/compare/3c7ae8...99b062
to view the most recent change to this blog (at the time of writing).
This works with any commit identifier, including branch names and tags (which is great for comparing releases). So, you could go here:

https://github.com/alexlafroscia/til-blog/compare/3c7ae8...master
to view all of the changes between the most recent commit and the current published version.
20.12.2017 08:07 · Checking Differences Between Commits in Github

I had to use a bit of a hack this week to ensure that a box always appeared at a 1:1 aspect ratio. Basically, by doing something like:
.box {
  width: 100%;
  padding-top: 100%;
}
You can force something to display with the same height and width, since the padding percentage is relative to the width.
However, this is not true of flex children, which this box happened to be. Chrome rendered just fine, but Firefox had different behavior, as documented here (the Firefox behavior might actually be more correct, I have no idea). Flex children’s percentage-padding is relative to the flex parent.
19.7.2017 23:00 · Maintaining aspect ratio in CSS
This post was originally published on Medium. You can view that here.
These days, the experience of writing JavaScript is influenced as much by the tools used during development as those used at runtime. Projects exist that can drastically improve your code quality, from basic help like spotting typos to adding full-blown type checking to a dynamically typed language. As great as these tools are on their own, they’re made even more useful when they can be brought directly into your editor. This has given rise to tools like Visual Studio Code with native support for these features. But what is a Vim junky to do?
This post will cover my setup for bringing three core IDE features into Vim:

- Linting
- Fixing and formatting code
- Autocomplete
I personally use Neovim instead of “regular” Vim. If you’re using “regular” Vim, your mileage with these suggestions may vary as some of the plugin features may only be available in Neovim. I highly recommend checking it out if you haven’t already.
This post will mostly cover plugins for Vim; if you’re not familiar with the concept, this gist covers it really well. If you need a TL;DR, I highly recommend vim-plug, which is what I use.
In general, a linter is a tool that can look at your code and report potential errors without having to run the code. The most popular linter for JavaScript these days is by far ESLint; it has support for modern JS features (including JSX) and is easily extended with additional rules and features.
If you’re not working with ESLint already, getting it installed takes just a few steps (to be run from within an existing JavaScript project):
yarn add -D eslint
yarn eslint -- --init
# Or, if you're using npm
npm install -D eslint
./node_modules/.bin/eslint --init
Installing ESLint into a project through Yarn (or npm)

This will install ESLint as a “development dependency” of your project. The initialization will ask how you want to set up your project. This will change based on the specific project you’re working on. If you’re not sure, I suggest trying out one of the popular suggested configurations.
There are many Vim plugins for running linters but the best experience I’ve had comes from using Ale. It has some really neat features that set it apart from other solutions, such as running linters asynchronously to avoid locking up the editor and checking your file as you type without needing to save.
With the plugin installed through your method of choice, you’re on your way to a great linting experience in Vim. It supports ESLint out of the box and should start working without any additional configuration. If you open a file in your JS project that has a linting error, you’ll end up with an experience like this:
Notice the annotations next to erroneous lines, the hint about errors on the current line at the bottom of the screen, and the total number of errors in the bottom-right-hand corner.
With powerful tools like ESLint available for checking code style, decisions around the right way to configure them often arise. Coding style is very personal and these discussions, as basic as they may seem, can cause undue tension between team members. This has given rise to tools like Prettier, which aim to reduce this friction by taking an extremely opinionated stance on code style. Ideally, a few keystrokes in your editor render your file perfectly formatted.
Since we’re already working with ESLint, which has its own methods for fixing code (including changes beyond what Prettier would make), we’re going to take a two-step approach to fixing code in Vim:
This will allow Vim to report errors from ESLint and Prettier, and fix both at the same time.
The first step is to get ESLint reporting about Prettier errors. There is a plugin/configuration pair provided by the Prettier project that allows us to do just that.
To install, run the following:
yarn add -D prettier eslint-plugin-prettier eslint-config-prettier
# Or, if you're using npm
npm install -D prettier eslint-plugin-prettier eslint-config-prettier
Then, update your ESLint configuration to look something like the following (it’s in the root of your project, in case you can’t find it):
{
  "extends": [
    "eslint:recommended",
    "prettier"
  ],
  "plugins": [
    "prettier"
  ],
  "rules": {
    "prettier/prettier": "error"
  }
}
Now, running ESLint will report issues from ESLint and Prettier, and fixing ESLint errors will fix Prettier ones too.
The setup for running ESLint’s fixer from within Vim is actually pretty simple, thanks to the ale plugin that we installed earlier. Not only can it report errors, but it can run fixers too! Add the following to your Vim configuration:
let g:ale_fixers = {
\ 'javascript': ['eslint']
\ }
Now, running :ALEFix while editing a JS file will run the fixer on the buffer’s content and write the corrected content back to the buffer. You should see all of the fixable errors automatically go away, leaving you to fix the rest yourself (or save the file and continue working).

If you want to make this a bit easier for yourself, I’d recommend adding a shortcut to run :ALEFix. You can add something like the following to your Vim configuration file:
nmap <leader>d <Plug>(ale_fix)
This lets <leader>d fix the current file. For me, that means a quick SPACE-d before saving makes sure that everything looks good, but that will depend on what your leader key is.
The last piece of a modern JS environment is a good autocomplete experience. Vim comes with some basic functionality through omnicomplete right out of the box, but with tools like TypeScript and Flow, we can get better integration than that.
My go-to plugin for a richer autocomplete experience is deoplete. Deoplete provides a framework for providing autocomplete data to Vim. Some recommended companion plugins are:

- deoplete-ternjs – Autocomplete powered by Tern. Should work with most projects, but less powerful than Flow or TypeScript
- deoplete-flow – Autocomplete powered by Flow (demonstrated below)
- nvim-typescript – Provides Deoplete suggestions plus a bunch of other tools for TypeScript development

While Vim is certainly usable without this kind of integration, it can be a huge help in preventing runtime errors.
I hope you’ve found these resources useful. For more information on my personal configuration, you can check out my dotfiles or chat with me on Twitter.
21.6.2017 08:07 · Writing JS in Vim

If you end up in a situation where you want to grab an old commit (from some other branch, even) but don’t know the commit hash, you want to access the reflog. It allows you to access old commits easily:
git reflog | head -200 | grep TMP
This will show info on all the commits within the last 200 that have a message containing TMP. This is really useful if you’re using some temporary hack that you want to apply/remove repeatedly without keeping it in a branch.
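Once you’ve found the commit in that output, you can apply it by the hash the reflog shows; the hash below is just a placeholder:

# Apply the temporary hack on top of your current branch
git cherry-pick abc1234

# ...and back it out again when you're done with it
git revert abc1234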
I noticed that my $PATH was being set differently between tmux and a regular shell. Specifically, without tmux my Ruby installation from asdf would override the default one, but in tmux it would not.
Eventually, I was tipped off by this blog post that the issue might be my /etc/zprofile file, and that was indeed the case; changing the code to this fixed it for me:
if [ -x /usr/libexec/path_helper ]; then
  PATH=""
  eval `/usr/libexec/path_helper -s`
fi
Now, the directories that I want on the front of $PATH are consistently placed there.