Some are good, some are bad. One is bloody adorable (the fairy-wren, of course). Some are quick and dirty and some took too much time. Some prompts sucked and some were useful. In fact, if you want to draw spurious connections of your own, I’ve even put information on all of them into a handy CSV.

For instance, there seems to be an interesting correlation between orientation and background colour:

Quick explainer on some of the less usefully-named columns:

- **Usefulness**: how well the prompt succeeded at prompting. If I was racking my brain throughout the day, it’s bad.
- **Presence**: the extent to which the thing described by the prompt exists in the picture to begin with. There’s no actual fairy in a fairy-wren, so it gets low presence.
- **Prominence**: how apparent the thing described by the prompt (even if tenuously à la “Fairy”) is in the image as a whole.
- **Joke**: is the essence of the image in itself a joke? For instance, a bat with a bat is a joke.
- **Time**: the editing time stored in the Krita file. This measures seconds, and I’m pretty sure it only counts time when something is actually happening, not just having the file open. It also includes any time spent before the first save.

Of course these are all subjective measures. Most of them are.

It’s somewhat surprising I managed to find the time for all of this, and I’m not so certain that’ll be the case next year. Even so, I probably wouldn’t do the inking style in future – that’s not how I usually draw, so it’d be nice to at least get practice doing the things I normally would instead. It was interesting that towards the beginning I’d sketch out the main features on a separate layer first and then do the thing proper, whereas after the first few days I’d use one layer from start to finish, sketching out lines directly with the ink-style brushes rather than a light pencil-type one.

In all it was interesting but I’ll be taking a bit of a break from drawing for a while (as if to imply I wouldn’t be doing other things regardless).

So the challenge was on to lay out a decent-looking set of triangles that could fold into an icosahedron, even though they would obviously never ever be folded into an icosahedron. Priorities, you know; this stuff matters.

You’ve no doubt seen the usual arrangement, a line of ten triangles with ten more along the top and bottom. But it doesn’t look very interesting nor exciting.

This sort of spatial geometry is far too much for my poor brain to handle, especially without any physical icosahedra to admire, so the first attempt was neither particularly appealing nor, on closer inspection, did it form an actual icosahedron. Oops.

It’s so obviously wrong, right? I couldn’t tell without deleting pieces step-by-step from the same 3D model that was supposed to be its basis, demonstrating the clear flaws in this haphazard methodology. Perhaps marginally deterred, I scrolled slightly further down through image search results for “icosahedron net,” and here’s where things start getting interesting.

Part of Figure 14 here shows a remarkably spaced-out, non-branching net:

But the net from this page blew my mind:

It’s a… completely non-branching path. Is that actually doable? It’s remarkably similar to the original concept, almost as though the pointless icosahedron needn’t even be a detraction. Some more advanced analysis is clearly going to be needed.

If the goal is to find some unbranching path across faces of an icosahedron, well, that’s kinda like (read: “is”) a Hamiltonian path in a graph of icosahedron faces. And with that done, maybe there could even be a complete cycle?

The vertices (black dots) of this graph are actually connected in the same way as the vertices of a dodecahedron – you can see the pentagonal faces formed by them. In fact the graph itself is called the “Dodecahedral graph.” So why is it here for an icosahedron? The icosahedron and dodecahedron have a special status as “duals”: the faces of each correspond exactly to the vertices of the other, with the same number of edges on both. The same relation holds for the cube and octahedron, while the dual of a tetrahedron is another tetrahedron: four faces and four vertices, each connected to the three others. The end result is, a graph of icosahedron faces will be the vertices of its dual (a dodecahedron), by the definition of dual.

So there’s a graph! Let’s see if we can make our own unbranching path through it.
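The search itself is easy enough to script. Here’s a minimal backtracking sketch in Python – the GP(10, 2) construction and vertex labelling are my own shortcut (the dodecahedral graph happens to be the generalized Petersen graph GP(10, 2)), not anything taken from the nets above:

```python
def dodecahedral_graph():
    # The dodecahedral graph built as the generalized Petersen graph GP(10, 2):
    # an outer 10-cycle (vertices 0-9), spokes to inner vertices (10-19),
    # and inner vertices joined in steps of two (forming two pentagons).
    adj = {i: set() for i in range(20)}
    for i in range(10):
        for a, b in ((i, (i + 1) % 10),             # outer cycle
                     (i, 10 + i),                    # spoke
                     (10 + i, 10 + (i + 2) % 10)):   # inner star
            adj[a].add(b)
            adj[b].add(a)
    return adj

def hamiltonian_cycle(adj, start=0):
    # Depth-first backtracking: extend a path until it visits every vertex
    # and can close back up to the start.
    n = len(adj)
    path, visited = [start], {start}
    def extend():
        if len(path) == n:
            return start in adj[path[-1]]  # can the cycle close?
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                if extend():
                    return True
                visited.remove(nxt)
                path.pop()
        return False
    return path if extend() else None

cycle = hamiltonian_cycle(dodecahedral_graph())
```

With only 20 vertices of degree 3, the backtracking finds a cycle near-instantly.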

With a bit of effort, the net this is supposed to abstractly represent can be found. Mostly I try to keep track of where it turns “left” or “right,” which is a bit of a head-spinner but at least consistently doable.

And that’s an icosahedron? In retrospect it seems obvious – there’s a line of ten triangles with caps of five more on each side. I find it quite remarkable how much easier this is to work with from a graph.

And finally, a complete cycle can indeed be found, though for a net it must be severed at some arbitrary point.

And this does form a real icosahedron! I ruled it out on paper and taped the ends together just for confirmation. Apparently finding a Hamiltonian cycle on the dodecahedral graph specifically is called the “icosian game.” Never heard of it before, but it’s evidently not too difficult to pull off… David Darling notes “the game was a complete sales flop, mainly because it was too easy, even for children.”

Of note is that, as far as I can tell, all Hamiltonian cycles for the dodecahedral graph are isomorphic (or whichever term the graph nerds/pedants would prefer). So any continuous cycle will yield nets that look like the image above, with some segment cut from the left and moved to the right. I chose to split it somewhere that gives a nice rotational symmetry.

So who cares about wall ornaments, here are some takeaways that I *might* have pulled from my arse, so be cautious: anything involving designing nets (a regular concern for many) is far easier if you first construct a graph of face connections, *i.e.* the dual of the polyhedral graph. Any net will then correspond to a “spanning tree” of the graph, a subgraph that includes every vertex (here representing faces) with no cycles. I guess.

Just to wrap up, a net where each face is as close as possible to a central one, using the techniques developed above:
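A “faces as close as possible to a central one” layout is just a breadth-first spanning tree of the face graph. A sketch of how one might generate it (again using my GP(10, 2) labelling of the dodecahedral graph):

```python
from collections import deque

def dodecahedral_graph():
    # The dodecahedral graph as the generalized Petersen graph GP(10, 2).
    adj = {i: set() for i in range(20)}
    for i in range(10):
        for a, b in ((i, (i + 1) % 10),             # outer cycle
                     (i, 10 + i),                    # spoke
                     (10 + i, 10 + (i + 2) % 10)):   # inner star
            adj[a].add(b)
            adj[b].add(a)
    return adj

def bfs_tree(adj, root=0):
    # Breadth-first spanning tree: each vertex (face) is attached at its
    # minimum possible distance from the root (central) face.
    parent, depth = {root: None}, {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                depth[w] = depth[v] + 1
                queue.append(w)
    return parent, depth

parent, depth = bfs_tree(dodecahedral_graph())
```

The resulting tree has 19 edges spanning all 20 faces, and no face ends up more than five folds from the centre.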

And one more thing… I noticed making these that there were always 10 triangles in one orientation and 10 in the reverse (for nets that are actually valid icosahedra, ahem…). But looking at the “standard” net, it seems trivial enough to move one or more of the top triangles into positions where they would be reversed, so the ratio would be unequal. I see such possibilities in other nets too. Is there some really funky graph theory that could indicate/predict the ratio? Evidently something to do with how far each vertex is from some arbitrary reference, since the triangles alternate orientation with each successive edge. What sort of graph would give the largest imbalance?

That’s the story I’ve read, anyway, but the details aren’t important. What matters is that this format is a horribly inefficient way to store that kind of data. Computers and integers don’t care about base 10 and you shouldn’t warp stuff into it for no reason.

The most efficient format for this kind of data is in the vein of Unix timestamps – count the number of seconds or days or [desired resolution] since a particular time, and there’s your integer. It’s guaranteed to map every single integer in the range to a complete, unbroken interval of time.

Problem is it’s harder to process. Thanks to the complications of calendars and convoluted methods of timekeeping, extracting even just the year from “1640955600 seconds since 1970-01-01 00:00 UTC” is not straightforward. Fortunately people with far more time to worry about these intricacies than us have already made libraries to deal with all of it, so I must preface (midface?) all of this with “just use the standard libraries,” and ask that you just keep the principles in mind when considering squeezing a bunch of stuff into one integer.

Throwing all that aside, what if we had the worst of both worlds, not optimally compact but also requiring more processing to extract?

(Exaggerating for comedic effect, what follows is honestly no harder to work with than the woefully inefficient method)

Dates in the Gregorian calendar have years of 12 months with 28–31 days. News to anyone no doubt. The (or, “A”) problem with squeezing a bunch of numbers together so it reads like 220103 is it wastes a lot of potential values. While a year has at most 366 days, this format distinguishes between 10000 days for any year – 96% of the potential values are wasted.

That’s because it’s been arbitrarily smashed to work nicely in base 10: Year × 10000 + Month × 100 + Day. A far better approach is hopefully apparent: multiply each part by only what is needed to distinguish it. Year × 12 + Month will uniquely encode any combination of year and one of 12 months. Even if one of those months is encoded as “12,” there won’t be a 0 for it to conflate with; it just needs a bit more processing to extract than if months ran from 0–11.

The only reason this approach will differ from the counting method of Unix timestamps is because months have differing numbers of days. And, with further resolution, sometimes days have different numbers of seconds. But leaving room for any month to potentially have 31 days, an efficient integer has the following format:

(Year × 12 + Month) × 31 + Day = Year × 372 + Month × 31 + Day

To simplify decoding, ensure months run from 0 to 11 and days from 0 to 30; the encoded values are then as compact as possible. This can be folded into the encoding as Year × 372 + Month × 31 + Day − 32, since subtracting 32 removes both one month and one day.

Today, 2022-01-03, becomes 752186, or just 8186 if the year is given as 22 instead of 2022. From 220103 to 8186 – it’s an obvious improvement with no sophisticated processing. This way every year distinguishes between 372 different days, which is more than the actual average of 365.2425 but literal orders of magnitude better than 10000.
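As a sanity check, the encoding is a one-liner (Python here, but any language with integers will do):

```python
def encode_date(year, month, day):
    # Pack a (year, month 1-12, day 1-31) date into a single integer.
    # The -32 shifts month and day to be effectively 0-based.
    return year * 372 + month * 31 + day - 32
```

Running `encode_date(2022, 1, 3)` gives the 752186 from above.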

Now to extract the individual values again just requires some simple integer division and remainder arithmetic.

Year = floor(datestamp / 372)

Month = floor((datestamp mod 372) / 31) + 1

Day = (datestamp mod 31) + 1

This will work as is even with negative years, so long as your language of choice implements modulo “correctly”, i.e. with the same sign as the divisor. JavaScript gets this wrong.
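In Python, whose `//` and `%` do follow the sign of the divisor, the decoding above works verbatim, negative years included (a sketch; the function name is mine):

```python
def decode_date(datestamp):
    # Python's floor division and modulo take the sign of the divisor,
    # which is exactly the "correct" behaviour needed here - even for
    # negative years.
    year = datestamp // 372
    month = (datestamp % 372) // 31 + 1
    day = datestamp % 31 + 1
    return year, month, day
```

For instance, year −1, month 1, day 1 encodes to −372, and decodes straight back.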

Horizontal axis is the dividend. Red is the quotient and green is the modulo.

Images by Mathnerd314159 on Wikimedia Commons, based on work by Salix alba.

Working with days of the month is a bit odd. From here it’s straightforward, with days of exactly 24 hours from 0 to 23, and hours of 60 minutes from 0 to 59. Incorporating 60 seconds would be more of the same, but at that point you really should just be using normal timestamps anyway. More so than you already should be, that is.

Datetime = ( Year × 372 + Month × 31 + Day ) × 1440 + Hour × 60 + Minute

Equivalent to a final monstrosity,

Year × 535680 + Month × 44640 + Day × 1440 + Hour × 60 + Minute

Now the first minute of year “22” is merely 11784960, which is 182 times smaller than 2147483647. This method is so much better that the year can literally be “2022” and still be going strong. In fact, this can encode every minute of every day of every year all the way up to and including year 4007. It gets all of year −4008, too.
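A roundtrip sketch to tie it all together (function names are mine):

```python
def encode_datetime(year, month, day, hour, minute):
    # (Year * 372 + Month * 31 + Day - 32) * 1440 + Hour * 60 + Minute
    return (year * 372 + month * 31 + day - 32) * 1440 + hour * 60 + minute

def decode_datetime(stamp):
    # divmod performs floor division, so this also handles negative years.
    date, time = divmod(stamp, 1440)
    hour, minute = divmod(time, 60)
    year = date // 372
    month = (date % 372) // 31 + 1
    day = date % 31 + 1
    return year, month, day, hour, minute
```

`encode_datetime(22, 1, 1, 0, 0)` gives that 11784960 first minute of year “22”.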

The fundamental format of this encoding is no different than the atrocity that started all of this. It just doesn’t have an unwarranted devotion to base 10.

Last year wasn’t great for my arbitrary “at least once a month” posting schedule, but was rather productive in terms of drawing. Below is a collection of most of those doodles and art-adjacent depictions, arranged roughly chronologically in English reading order.

Pithy closing remark.

Previously I’ve shown how human cone cells certainly do not peak at red, green and blue frequencies. But those colours are only associated with the peaks; what would it mean to depict the entire range of responses as a single colour?

The approach used here is to treat the relative spectral sensitivities of each receptor as a visible spectral distribution. You could consider it a sum of all detected frequencies weighted by responsivity.
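In code terms the computation is just a responsivity-weighted sum over the spectrum. A toy sketch below – the sample arrays are made-up placeholders purely to show the shapes involved, not real cone or colour-matching data (the actual values come from the cvrl.org tables):

```python
def receptor_colour(sensitivity, cmf_x, cmf_y, cmf_z):
    # Treat a receptor's spectral sensitivity as a light source: weight
    # the colour-matching functions by the sensitivity at each sampled
    # wavelength and sum, giving a single XYZ tristimulus value.
    X = sum(s * x for s, x in zip(sensitivity, cmf_x))
    Y = sum(s * y for s, y in zip(sensitivity, cmf_y))
    Z = sum(s * z for s, z in zip(sensitivity, cmf_z))
    total = X + Y + Z
    # Normalise so results are comparable between receptors.
    return (X / total, Y / total, Z / total)

# Placeholder five-sample "spectrum" for illustration only:
sens = [0.1, 0.6, 1.0, 0.5, 0.1]
xbar = [0.2, 0.1, 0.3, 0.8, 0.4]
ybar = [0.0, 0.3, 0.9, 0.6, 0.2]
zbar = [1.0, 0.7, 0.1, 0.0, 0.0]
xyz = receptor_colour(sens, xbar, ybar, zbar)
```

The real calculation works the same way, just with the full tabulated fundamentals and CMFs, plus a gamut-mapping step to reach sRGB.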

The result seems analogous to a redshift of sorts – it looks like L went from yellow-green to yellow; M went from blue-green to green; S went from purple to slightly-less-purple.

What is certainly true is that far less desaturation was needed to fit these within the sRGB gamut. In fact M, which had needed the most before, now needed none. I still desaturated everything uniformly though, with XYZ (0.05, 0.05, 0.05).

| | L | M | S | background |
| --- | --- | --- | --- | --- |
| sRGB | 247, 245, 34 | 138, 249, 80 | 77, 35, 236 | 69, 62, 60 |
| CSS value | `#f7f522` | `#8af950` | `#4d23ec` | `#453e3c` |

What does it mean in practice? Rather than representing cones with red, green and blue colours, you might consider something approximating the ones above. It doesn’t have to be exact – you could easily disregard the background – as this is more for the purpose of illustration than reliably reproducing the resulting colours.

So a plot of cone fundamentals could be done as such:

Hopefully using a purple with more contrast.

The cone fundamentals for this exercise were the Stockman and Sharpe (2000) 2-degree fundamentals based on the Stiles & Burch (1959) 10-degree CMFs, sourced from http://www.cvrl.org/cones.htm

Hypothesis: additive mixtures of lights average out the wavelengths to an intermediate colour.

Prediction: red and blue together will produce green.

Experimental: nope – it made purple.

Now, audience participation time – choose your own Conclusion!

- colours add by a mechanism somewhat more involved than their position along the visible spectrum.
- purple must be fake.

Humans see colour with *red*, *green*, and *blue* cone cells, named for the colours they best detect.

Human cone peak sensitivities:

This is part of the reason why the labels L, M, and S (for “long”, “medium”, and “short”) are used exclusively in the literature: the traditional names for cone cells have almost no bearing on their actual receptivities.

The other part is to do with how each responds to a much broader range than a single peak, ranges that overlap so no receptor monopolises any of the interior region.

Last year I was interested but didn’t have the time to dedicate; this year was going to be *different*. Then I almost forgot, was reminded about a week beforehand, forgot again and didn’t remember until the night of the 3rd of ~~Oc~~Inktober.

I aimed for something every few days, and it was going well until about the last week when life caught up (as it so annoyingly tends to) and I made a firm decision to focus entirely on other matters. (Life matters are also why there’s only been any mention here halfway into November.)

The point is, this is what there is to show for it all:

3rd – 6th 13th – 17th 7th – 10th 10th – 13th 18th – 24th

“Obnoxious watermark” because somebody insisted that one was too valuable to go on the internet. I kinda disagree but there’s an attempt to protect it anyway.

Takeaways from the experience:

- Focusing on texture alone can be quite fun or relaxing
- I can totally make arts if I put the time in. But I *don’t wanna*…

Anyway it was nice to whittle down the “ideas to do” list by three items. That dragon one is good enough to deserve a re-do in colour, at some point.

Hrm, 83% of depicted animals (can you find the fifth/sixth?) have wings. I wonder if there’s a connection here…

In terms of the engine and programming, the nature of this jump is determined by the values given to gravity and the initial impulse. But more practical is defining the height of the jump, and some quality that *could* be labelled the “floatiness coefficient”. Or it could just be the time to the top of the jump.

Standard advice is to experiment with some values until it feels good, but that’s lame. Let’s start from the basics, and quantify what impulse and gravity will yield the desired attributes instead. With calculus! You can gloss over the derivations for the resulting, calculusless formulae; I literally cannot stop you.

First the definitions:

- height above ground is given by *x*. Maximum height attained is the constant parameter *h*.
- time since the initial impulse is given by *t*. The ~~floatiness coefficient~~ time of maximum height is the constant parameter *T*.
- during the jump there is only a constant downwards acceleration, the magnitude of which is the constant parameter *g*.
- *v* is an alias for d*x*/d*t*.
- the upwards velocity that causes an actual jump is the constant parameter *u*.

The goal is to find *u*(*h*,*T*) and *g*(*h*,*T*).

Now, the maths. Acceleration, the sole thing governing motion here, has a few equivalent expressions:

\frac{\mathrm dv}{\mathrm dt} = \frac{\mathrm dv}{\mathrm dx}\,\frac{\mathrm dx}{\mathrm dt} = v \frac{\mathrm dv}{\mathrm dx} = -g

So:

\begin{aligned} \frac{\mathrm dv}{\mathrm dt} &= -g \\ -\mathrm dv &= g \,\mathrm dt \\ -\int_u^0 \mathrm dv &= \int_0^T g \,\mathrm dt \\ \left[v\right]_0^u &= \left[-gt\right]_0^T \\ u &= gT \end{aligned}

And:

\begin{aligned} v \frac{\mathrm dv}{\mathrm dx} &= -g \\ -v \,\mathrm dv &= g \,\mathrm dx \\ -\int_u^0v \,\mathrm dv &= \int_0^h g \,\mathrm dx \\ \left[\frac{v^2}{2}\right]_0^u &= \left[gx\right]_0^h \\ \frac{u^2}{2} &= gh \\ u^2 &= 2gh \end{aligned}

These two results together give the formula for *g*.

\begin{aligned} (gT)^2 &= 2gh \\ g^2T^2 &= 2gh \\ g &= \frac{2h}{T^2} \end{aligned}

And so, *u* in terms of *h* and *T*.

\begin{aligned} u &= gT \\ &= \left(\frac{2h}{T^2}\right)T \\ &= \frac{2h}{T} \end{aligned}

And that’s that – with the desired apex and time to it known, the necessary impulse and downwards acceleration have been derived.
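Those two results fit in a few lines, along with an analytic check that the apex really lands at (*T*, *h*) (the example values of *h* and *T* are arbitrary):

```python
def jump_parameters(h, T):
    # Impulse u and gravity magnitude g for a jump peaking at height h
    # at time T, per the derivation above.
    g = 2 * h / T ** 2
    u = 2 * h / T
    return u, g

u, g = jump_parameters(h=4.0, T=1.0)

# Closed-form position and velocity under constant acceleration:
x = lambda t: u * t - 0.5 * g * t ** 2
v = lambda t: u - g * t
```

At *t* = *T* the position comes out to exactly *h* and the velocity to exactly 0, as designed.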

Suppose you don’t want to fiddle with gravity just to get a nice-feeling jump. The acceleration is already set and you just want a particular height without caring too much about how quickly or not-quickly that happens. The expression for this case was pretty much already derived:

\begin{aligned} u^2 &= 2gh \\ u &= \sqrt{2gh} \end{aligned}

Every other combination of the parameters probably isn’t important enough to enumerate explicitly, but can be derived pretty easily if necessary.

And what about the case where the height should vary with how long the jump key is held down? The proper way to handle this scenario is to not do it. That mechanic just reduces to the annoyance of always holding the button down regardless, except that one part where the level designer felt they had to make use of their “mechanic” *somehow*, so there’s a low ceiling with spikes or something, and then players have the even greater annoyance of trying not to hold it for too long but they’ll still hold it just a touch too long and get frustrated anyway, so just don’t do it. Build levels around a single, standard jump height and let the gameplay flow. Yes I have opinions.

Now what about numeric integration? Everything above is fine in the land of spherical cows, but suppose the state of the world is updated on a frame-by-frame basis and you don’t have anything resembling an analytical solution with ½*gt*². Do the formulae hold up?

They shouldn’t be too far off, because with constant acceleration independent of any current state, there’s no real way for errors to accumulate. Even with semi-implicit Euler integration like so:

```
vel += acc * dt;
pos += vel * dt;
```

Given `acc` is a constant −*g* and `vel` starts at *u*, then after *n* timesteps `vel` will be `u − n*g*dt`. Which means the accumulated change in `pos` will be

\begin{aligned} &(u - g\, dt)\, dt\\ +\;&(u - 2g\, dt)\, dt\\ +\;&\dots\\ +\;&(u - ng\, dt)\, dt \end{aligned}

As an arithmetic series, this sum is equal to

\begin{aligned} &\frac n 2 \big[ (u-g\, dt)\,dt + (u-ng\, dt)\,dt \big] \\ =\;& \frac{n\,dt}{2} \big[ u-g\, dt + u-ng\, dt \big] \\ =\;& \frac{n\,dt}{2} \big[ 2u - g\,dt - ng\,dt \big] \end{aligned}

The top of the jump should be once *n*·*g*·*dt* = *u*, thus *n* = *u*/(*g*·*dt*).

\begin{aligned} \frac{n\,dt}{2} \big[ 2u - g\,dt - ng\,dt \big] &= \frac{u\,dt}{2g\,dt} \left[ 2u - g\,dt - \left(\frac{u}{g\,dt}\right) g\,dt \right] \\ &= \frac{u}{2g} [ 2u - g\,dt - u] \\ &= \frac{u}{2g} \big[ u - g\,dt \big] \\ &= \frac{u^2}{2g} - \frac{u\,dt}{2} \end{aligned}

The first part is the “proper” result, so there is a small error of *u*·*dt*/2, which will naturally decrease with smaller timesteps.

Trying to set the initial velocity to *u* + *g*·*dt* in an attempt to cancel the first timestep’s effect on it doesn’t actually help – it just makes the error term positive instead. However, using *u* + *g*·*dt*/2 seems to work fine. The internal representation of velocity at the apex will instead be slightly off (specifically, it will be *g*·*dt*/2 instead of 0), but that hardly matters because the next timestep makes that negative and still produces a descent.

So, at least in the case of semi-implicit Euler integration as above, a better formulation for *u* might be 2*h*/*T* + *h*·*dt*/*T*². Which is a pretty insignificant difference, and it’s unlikely all timesteps will be equal anyway… unless you’re doing something with an accumulator, but really, numeric integration is a whole set of textbooks of its own.
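For the sceptical, here’s a self-contained check of the error term and its correction under semi-implicit Euler (the particular *h*, *T* and *dt* are arbitrary test values):

```python
def simulate(u0, g, dt, steps):
    # Semi-implicit Euler: update velocity first, then position
    # using the already-updated velocity.
    pos, vel = 0.0, u0
    for _ in range(steps):
        vel += -g * dt
        pos += vel * dt
    return pos

h, T, dt = 4.0, 1.0, 0.01
g = 2 * h / T ** 2        # 8.0
u = 2 * h / T             # 8.0
steps = round(T / dt)     # the apex is reached after T/dt steps

plain = simulate(u, g, dt, steps)                    # undershoots by u*dt/2
corrected = simulate(u + g * dt / 2, g, dt, steps)   # lands on h
```

The plain run peaks at *h* − *u*·*dt*/2, while the corrected impulse peaks at *h* to within floating-point noise.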

There’s a lot of talk over how Celsius’s basis in water is more useful, how negative means freezing, how Fahrenheit’s zero is very roughly the coldest weather that occurs. But both zeroes are equally arbitrary. Anders Celsius used water; Daniel Fahrenheit used a mixture of ice, water and ammonium chloride for whatever reason.

But neither of these arbitrary points are at all close to zero temperature. The thermodynamic temperature of an object continues to reduce linearly, with no changes in behaviour, below these points.

There is a lower limit for temperature, and the Kelvin scale uses it. And that’s it, really. Imagine if rulers had their zero point some particular distance along them, such that the lower end of the scale were always some specific negative value. Why? Why would you use that?

An absolute scale is the only sensible choice for dealing with temperatures. What’s the efficiency of a Carnot engine operating between 25 ℃ and 925 ℃? Beats me. What’s the efficiency of a Carnot engine operating between 300 K and 1200 K? Well that’s just 75%.

Most people don’t need to determine these sorts of things. Hence, the second part: the relative scales have no useful merit of their own.

Discussion of temperature scales tends to focus around weather because that’s all temperature scales are usually used for. Fortunately, the Kelvin scale actually *does* have some decent applicability here too: 300 K (26.85 ℃) serves as a nice reference point for temperate temperature that verges on warm. Anything above 300 K is starting to get hot, and cooler than that is progressively fine, chilly, then freezing at 270 K.

In any other situation, the magnitude of numbers was arbitrary anyway. In one world, you pre-heat your oven to 220 ℃. In another, you pre-heat to 490 K. (In the third world, you use 430 ℉.) Nothing is lost with the Kelvin scale. And now, if you want to know the pressure of an ideal gas filling that oven, or how efficiently it could drive a heat engine, there are no awkward conversions involved. Handy, huh…

Another benefit presents itself: most people don’t have the degree symbol on their keyboard. They certainly don’t have a button for “℃” (that’s one character). But anyone can type a space followed by the letter “K”.

Some people (a fair few, actually) insist that such temperatures are spoken as “300 Kelvin”, and will complain if you say “kelvins”. I don’t know why. There’s no style guide advocating that.

Kelvins are an SI unit, like joules and metres. That means they form plurals just fine, and the full name of the unit is not capitalised.

The whole point is that it *is* an absolute scale, so you really can speak of temperatures as a proper quantity, rather than a relative difference from some arbitrary reference point.