Last year wasn’t great for my arbitrary “at least once a month” posting schedule, but was rather productive in terms of drawing. Below is a collection of most of those doodles and art-adjacent depictions, arranged roughly chronologically in English reading order.

Pithy closing remark.

Previously I’ve shown how human cone cells certainly do not peak at red, green and blue frequencies. But those colours are only associated with the peaks; what would it mean to depict the entire range of responses as a single colour?

The approach used here is to treat the relative spectral sensitivities of each receptor as a visible spectral distribution. You could consider it a sum of all detected frequencies weighted by responsivity.
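The weighting itself is just a dot product. Here’s a minimal sketch in Python, assuming the cone sensitivity curve and the XYZ colour-matching functions have been loaded as parallel lists sampled at the same wavelengths (all names here are hypothetical):

```python
def sensitivity_to_xyz(sensitivity, xbar, ybar, zbar):
    """Treat a receptor's relative sensitivity curve as a spectral power
    distribution and integrate it against the colour-matching functions."""
    X = sum(s * x for s, x in zip(sensitivity, xbar))
    Y = sum(s * y for s, y in zip(sensitivity, ybar))
    Z = sum(s * z for s, z in zip(sensitivity, zbar))
    return (X, Y, Z)
```

From there it’s the usual XYZ → linear sRGB matrix, gamut-fitting (the desaturation described below), and gamma encoding.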

The result seems analogous to a redshift of sorts – it looks like L went from yellow-green to yellow; M went from blue-green to green; S went from purple to slightly-less-purple.

What is certainly true is that far less desaturation was needed to fit these within the sRGB gamut. In fact M, which had needed the most before, now needed none. I still desaturated everything uniformly though, with XYZ (0.05, 0.05, 0.05).

| | L | M | S | background |
|---|---|---|---|---|
| sRGB | 247, 245, 34 | 138, 249, 80 | 77, 35, 236 | 69, 62, 60 |
| CSS value | `#f7f522` | `#8af950` | `#4d23ec` | `#453e3c` |

What does it mean in practice? Rather than representing cones with red, green and blue colours, you might consider something approximating the ones above. It doesn’t have to be exact – you could easily disregard the background – since this is more for the purpose of illustration than for reliably reproducing the resulting colours.

So a plot of cone fundamentals could be done like so:

Hopefully using a purple with more contrast.

The cone fundamentals for this exercise were the Stockman and Sharpe (2000) 2-degree fundamentals based on the Stiles & Burch (1959) 10-degree CMFs, sourced from http://www.cvrl.org/cones.htm

Hypothesis: additive mixtures of lights average out the wavelengths to an intermediate colour.

Prediction: red and blue together will produce green.

Experiment: nope – it made purple.

Now, audience participation time – choose your own Conclusion!

- colours add by a mechanism somewhat more involved than their position along the visible spectrum.
- purple must be fake.
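The additive behaviour is easy to poke at in a toy linear-RGB model (a simplification – real lights are spectra, not triplets): channels simply add, so there is no mechanism by which red plus blue could ever drift toward green.

```python
def mix(a, b):
    """Additive mixture of two lights in linear RGB: per-channel sum, clipped."""
    return tuple(min(x + y, 1.0) for x, y in zip(a, b))

red = (1.0, 0.0, 0.0)
blue = (0.0, 0.0, 1.0)
print(mix(red, blue))  # (1.0, 0.0, 1.0) – magenta, no green in sight
```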

Humans see colour with red, green, and blue cone cells, named for the colours they best detect.

Human cone peak sensitivities:

This is part of the reason why the labels L, M, and S (for “long”, “medium”, and “short”) are used exclusively in the literature: the traditional names for cone cells have almost no bearing on their actual spectral sensitivities.

The other part is that each responds to a much broader range than a single peak – ranges that overlap, so no receptor monopolises any interior region of the spectrum.

Last year I was interested but didn’t have the time to dedicate; this year was going to be *different*. Then I almost forgot, was reminded about a week beforehand, forgot again and didn’t remember until the night of the 3rd of ~~Oc~~Inktober.

I aimed for something every few days, and it was going well until about the last week when life caught up (as it so annoyingly tends to) and I made a firm decision to focus entirely on other matters. (Life matters are also why there’s only been any mention here halfway into November.)

The point is, this is what there is to show for it all:

“Obnoxious watermark” because somebody insisted that one was too valuable to go on the internet. I kinda disagree but there’s an attempt to protect it anyway.

Takeaways from the experience:

- Focusing on texture alone can be quite fun or relaxing
- I can totally make arts if I put the time in. But I *don’t wanna*…

Anyway it was nice to whittle down the “ideas to do” list by three items. That dragon one is good enough to deserve a re-do in colour, at some point.

Hrm, 83% of depicted animals (can you find the fifth/sixth?) have wings. I wonder if there’s a connection here…

In terms of the engine and programming, the nature of this jump is determined by the values given to gravity and the initial impulse. But more practical is defining the height of the jump, and some quality that *could* be labelled the “floatiness coefficient”. Or it could just be the time to the top of the jump.

Standard advice is to experiment with some values until it feels good, but that’s lame. Let’s start from the basics, and quantify what impulse and gravity will yield the desired attributes instead. With calculus! You can gloss over the derivations for the resulting, calculusless formulae; I literally cannot stop you.

First the definitions:

- height above ground is given by *x*. Maximum height attained is the constant parameter *h*.
- time since the initial impulse is given by *t*. The ~~floatiness coefficient~~ time of maximum height is the constant parameter *T*.
- during the jump there is only a constant downwards acceleration, the magnitude of which is the constant parameter *g*.
- *v* is an alias for d*x*/d*t*.
- the upwards velocity that causes an actual jump is the constant parameter *u*.

The goal is to find *u*(*h*,*T*) and *g*(*h*,*T*).

Now, the maths. Acceleration, the sole thing governing motion here, has a few equivalent expressions:

\frac{\mathrm dv}{\mathrm dt} = \frac{\mathrm dv}{\mathrm dx}\,\frac{\mathrm dx}{\mathrm dt} = v \frac{\mathrm dv}{\mathrm dx} = -g

So:

\begin{aligned} \frac{\mathrm dv}{\mathrm dt} &= -g \\ -\mathrm dv &= g \,\mathrm dt \\ -\int_u^0 \mathrm dv &= \int_0^T g \,\mathrm dt \\ \left[v\right]_0^u &= \left[gt\right]_0^T \\ u &= gT \end{aligned}

And:

\begin{aligned} v \frac{\mathrm dv}{\mathrm dx} &= -g \\ -v \,\mathrm dv &= g \,\mathrm dx \\ -\int_u^0v \,\mathrm dv &= \int_0^h g \,\mathrm dx \\ \left[\frac{v^2}{2}\right]_0^u &= \left[gx\right]_0^h \\ \frac{u^2}{2} &= gh \\ u^2 &= 2gh \end{aligned}

These two results together give the formula for *g*.

\begin{aligned} (gT)^2 &= 2gh \\ g^2T^2 &= 2gh \\ g &= \frac{2h}{T^2} \end{aligned}

And so, *u* in terms of *h* and *T*.

\begin{aligned} u &= gT \\ &= \left(\frac{2h}{T^2}\right)T \\ &= \frac{2h}{T} \end{aligned}

And that’s that – with the desired apex and time to it known, the necessary impulse and downwards acceleration have been derived.
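The end result is small enough to fit in two lines of code. A sketch in Python (function and parameter names are hypothetical):

```python
def jump_params(h, T):
    """Gravity magnitude and initial upwards impulse for a jump that
    reaches height h at time T after launch."""
    g = 2 * h / T**2
    u = 2 * h / T
    return g, u

# A 3-unit-high jump peaking half a second after launch:
g, u = jump_params(3.0, 0.5)  # g = 24.0, u = 12.0
```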

Suppose you don’t want to fiddle with gravity just to get a nice-feeling jump. The acceleration is already set and you just want a particular height without caring too much about how quickly or not-quickly that happens. The expression for this case was pretty much already derived:

\begin{aligned} u^2 &= 2gh \\ u &= \sqrt{2gh} \end{aligned}
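And as code, under the same hypothetical naming as before:

```python
from math import sqrt

def impulse_for_height(g, h):
    """Initial upwards velocity reaching height h under a fixed gravity g."""
    return sqrt(2 * g * h)

print(impulse_for_height(24.0, 3.0))  # 12.0
```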

Every other combination of the parameters probably isn’t important enough to enumerate explicitly, but can be derived pretty easily if necessary.

And what about the case where the height should vary with how long the jump key is held down? The proper way to handle this scenario is to not do it. That mechanic just reduces to the annoyance of always holding the button down regardless, except that one part where the level designer felt they had to make use of their “mechanic” *somehow*, so there’s a low ceiling with spikes or something, and then players have the even greater annoyance of trying not to hold it for too long but they’ll still hold it just a touch too long and get frustrated anyway, so just don’t do it. Build levels around a single, standard jump height and let the gameplay flow. Yes I have opinions.

Now what about numeric integration? Everything above is fine in the land of spherical cows, but suppose the state of the world is updated on a frame-by-frame basis and you don’t have anything resembling an analytical solution with ½gt^{2}. Do the formulae hold up?

They shouldn’t be too far off, because with constant acceleration independent of any current state, there’s no real way for errors to accumulate. Even with semi-implicit Euler integration like so:

```
vel += acc * dt;  // update velocity first (acc is the constant -g)
pos += vel * dt;  // then position, using the freshly updated velocity
```

Given `acc` is a constant −*g* and `vel` starts at *u*, then after *n* timesteps `vel` will be `u − n*g*dt`. Which means the accumulated change in `pos` will be

\begin{aligned} &(u - g\, dt)\, dt\\ +\;&(u - 2g\, dt)\, dt\\ +\;&\dots\\ +\;&(u - ng\, dt)\, dt \end{aligned}

As an arithmetic series, this sum is equal to

\begin{aligned} &\frac n 2 \big[ (u-g\, dt)\,dt + (u-ng\, dt)\,dt \big] \\ =\;& \frac{n\,dt}{2} \big[ u-g\, dt + u-ng\, dt \big] \\ =\;& \frac{n\,dt}{2} \big[ 2u - g\,dt - ng\,dt \big] \end{aligned}

The top of the jump should be once *n*·*g*·*dt* = *u*, thus *n* = *u*/(*g*·*dt*).

\begin{aligned} \frac{n\,dt}{2} \big[ 2u - g\,dt - ng\,dt \big] &= \frac{u\,dt}{2g\,dt} \left[ 2u - g\,dt - \left(\frac{u}{g\,dt}\right) g\,dt \right] \\ &= \frac{u}{2g} [ 2u - g\,dt - u] \\ &= \frac{u}{2g} \big[ u - g\,dt \big] \\ &= \frac{u^2}{2g} - \frac{u\,dt}{2} \end{aligned}

The first part is the “proper” result, so there is a small error of *u*·*dt*/2, which will naturally decrease with smaller timesteps.

Trying to set the initial velocity to *u* + *g*·*dt* in an attempt to cancel the first timestep’s effect on it doesn’t actually help – it just makes the error term positive instead. However, using *u* + *g*·*dt*/2 seems to work fine. The internal representation of velocity at the apex will instead be slightly off (specifically, it will be *g*·*dt*/2 instead of 0), but that hardly matters because the next timestep makes that negative and still produces a descent.

So, at least in the case of semi-implicit Euler integration as above, a better formulation for *u* might be 2*h*/*T* + *h*·*dt*/*T*^{2}. Which is a pretty insignificant difference, and it’s unlikely all timesteps will be equal anyway… unless you’re doing something with an accumulator, but really, numeric integration is a whole set of textbooks of its own.
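To check the error and its correction numerically, here’s a sketch of the same semi-implicit Euler loop run to its apex (the particular values of *h*, *T* and *dt* are arbitrary):

```python
def apex_height(u, g, dt, steps=1000):
    """Run the semi-implicit Euler jump and return the highest position reached."""
    pos, vel, best = 0.0, u, 0.0
    for _ in range(steps):
        vel -= g * dt  # acc is the constant -g
        pos += vel * dt
        best = max(best, pos)
    return best

h, T, dt = 3.0, 0.5, 1 / 60
g, u = 2 * h / T**2, 2 * h / T
print(apex_height(u, g, dt))               # ≈ 2.9 – short of h by u*dt/2 = 0.1
print(apex_height(u + g * dt / 2, g, dt))  # ≈ 3.0 – the corrected impulse recovers h
```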

There’s a lot of talk over how Celsius’s basis in water is more useful, how negative means freezing, how Fahrenheit’s zero is very roughly the coldest weather that occurs. But both zeroes are equally arbitrary. Anders Celsius used water; Daniel Fahrenheit used a mixture of ice, water and ammonium chloride for whatever reason.

But neither of these arbitrary points is at all close to zero temperature. An object’s thermodynamic temperature can continue to decrease below them, with no change in behaviour whatsoever.

There is a lower limit for temperature, and the Kelvin scale uses it. And that’s it, really. Imagine if rulers had their zero point some particular distance along them, such that the lower end of the scale were always some specific negative value. Why? Why would you use that?

An absolute scale is the only sensible choice for dealing with temperatures. What’s the efficiency of a Carnot engine operating between 25 ℃ and 925 ℃? Beats me. What’s the efficiency of a Carnot engine operating between 300 K and 1200 K? Well that’s just 75%.
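A quick sketch (Python purely for illustration) of why the kelvin version is the easy one – the Carnot formula η = 1 − *T*_cold/*T*_hot wants absolute temperatures:

```python
def carnot_efficiency(t_cold, t_hot):
    """Maximum heat-engine efficiency between two reservoirs, in kelvins."""
    return 1 - t_cold / t_hot

print(carnot_efficiency(300, 1200))  # 0.75

# Feeding it Celsius values gives nonsense - hence "beats me":
print(carnot_efficiency(25, 925))    # ~0.973, which is wrong
```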

Most people don’t need to determine these sorts of things. Hence, the second part: the relative scales have no useful merit of their own.

Discussion of temperature scales tends to focus around weather because that’s all temperature scales are usually used for. Fortunately, the Kelvin scale actually *does* have some decent applicability here too: 300 K (26.85 ℃) serves as a nice reference point for temperate temperature that verges on warm. Anything above 300 K is starting to get hot, and cooler than that is progressively fine, chilly, then freezing at 270 K.

In any other situation, the magnitude of the numbers is arbitrary anyway. In one world, you pre-heat your oven to 220 ℃. In another, you pre-heat to 490 K. (In a third, you use 430 ℉.) Nothing is lost with the Kelvin scale. And now, if you want to know the pressure of an ideal gas filling that oven, or how efficiently it could drive a heat engine, there are no awkward conversions involved. Handy, huh…

Another benefit presents itself: most people don’t have the degree symbol on their keyboard. They certainly don’t have a button for “℃” (that’s one character). But anyone can type a space followed by the letter “K”.

Some people (a fair few, actually) insist that such temperatures are spoken as “300 Kelvin”, and will complain if you say “kelvins”. I don’t know why. There’s no style guide advocating that.

Kelvins are an SI unit, like joules and metres. That means they form plurals just fine, and the full name of the unit is not capitalised.

The whole point is that it *is* an absolute scale, so you really can speak of temperatures as a proper quantity, rather than a relative difference from some arbitrary reference point.

This video was too good to not share. It covers the entire history of Earth from 4.5 billion years ago to today, showing notable geological and biological events the whole way.

There’s a fair abundance of available information on continental drift, atmospheric composition, evolution etc. changing over time, but it’s great to see it together in context.

It gets a little hectic from the end of the Cryogenian, so it might have been nice to see the time rate drop a bit lower than ×190 000 000 000 000 for the time since the Cambrian explosion.

Earth is certainly a unique and interesting planet. Not only does it possess the most complex geologic system known to exist, but also no shortage of organisms more than willing to shake things up quite regularly…

A lot of people don’t like it because it bends the cards. It’s worth pointing out that while the “showy” shuffles with a flashy finish will bend the cards quite severely, it’s quite doable to riffle a deck with only minimal bending on the corners that doesn’t leave any permanent damage.

Regardless, here’s a simple method for performing a riffle (technically more of a weave, but with less effort) that involves absolutely no bending whatsoever. It’s trivially fast, easy, and done entirely in your hands – very effective for fidgeting when reading slabs of text anywhere.

**Step 1**: hold the edges of the deck between your thumb and fingers. Either pair of edges will work, but holding the short edges probably makes the most sense.

**Step 2**: hold the other pair of edges between the fingers of your other hand, then slant the deck to the side. Ideally the bottom should migrate to the base of your thumb.

**Step 3**: splay the bottom of the deck outwards by rotating your hands upwards while keeping pressure with your thumb.

**Step 4**: pull about half the cards out with your other hand. You should be able to maintain the splayed structure.

**Step 5**: push the splayed edges into each other. You might need to slide them back and forth a bit, but the two halves should fit together with very little effort.

At this point, just square up the edges and you’re done. The main problem with shuffling like this, as with any in-hand shuffle, is that the bottom cards can be revealed if you’re not careful. A cut or two on the table surface is always good practice for any shuffle.
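In the idealised limit, what those steps produce is a single interleave of two halves – easy to model as a toy sketch:

```python
def weave(deck):
    """One idealised weave: cut the deck in half, then interleave the
    halves one card at a time (bottom half's cards falling first)."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    out = []
    for i in range(len(bottom)):
        out.append(bottom[i])
        if i < len(top):
            out.append(top[i])
    return out

print(weave([1, 2, 3, 4, 5, 6]))  # [4, 1, 5, 2, 6, 3]
```

A real riffle interleaves in irregular clumps, and that irregularity is where the actual randomisation comes from – a perfectly regular weave like this one is entirely deterministic.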

And now I feel like I must have illustrated WikiHow pages in a former life…

Well if you’re going to do something poorly, it’s nothing short of dishonest to go any less than *all the way*. In that vein, let’s define a system of units, analogous to the SI, where everything is based on how things change over time rather than the quantities themselves. These units will take the names of SI units wherever they are equivalent.

First and foremost is **frequency**. Quantities changing over time have little use without some concept of time itself. The unit of frequency is the hertz (Hz), produced by defining the frequency of radiation produced by the ground-state hyperfine transition of caesium-133 (Δ*ν*_{Cs}) as 9 192 631 770 Hz. Trivially, time, measured in seconds, is derived as Hz^{-1}.

The next base unit measures **velocity**. We’ll call this unit the “ved” (v), with the speed of light in vacuum (`c`) made to be 299 792 458 v. Thus a metre is derived as v Hz^{-1}, or v s if you really want.

Next is **power**. The value of a watt (W) is realised by defining the Planck constant (`h`) as 6.626 070 15 × 10^{-34} W Hz^{-2}, which then opens the way to numerous other mechanical units as described further below.

Onwards to **electric current**, where this all started. The ampere (A) is found when the elementary charge (`e`) is made 1.602 176 634 × 10^{-19} A Hz^{-1}. There’s not even a need to try here; it’s already silly.

And the last of the “important”/”necessary”/”sensible” units, **thermodynamic temperature**. There aren’t really units to do with rates of temperature; heat transfer, for instance, is built around energy. So the kelvin (K) is what happens when the Boltzmann constant (`k`_{B}) is 1.380 649 × 10^{-23} W Hz^{-1} K^{-1}.

But what of **amount of substance** or **luminous intensity**? Amount of substance is just a quantity – it’s a dimension if you say it is, but doesn’t really need to be. In my mind it’s far better to be more specific, like energy per bond or mass per atom, rather than quantifying “large amount of *something*, but I won’t explicitly say what.” And luminous intensity is radiant intensity as perceived by the human visual system, which can’t even be based in physical constants. So as far as I’m concerned, neither of these are a necessary basis for a practical system of units.

So, to summarise the foundations of the Units of Rate:

Measure | Unit | Definition |
---|---|---|
Frequency | Hz | Δν_{Cs} ∕ 9 192 631 770 |
Velocity | v | c ∕ 299 792 458 |
Power | W | (h ∕ 6.626 070 15 × 10^{-34}) × (Δν_{Cs} ∕ 9 192 631 770)^{2} |
Electric current | A | (e ∕ 1.602 176 634 × 10^{-19}) × (Δν_{Cs} ∕ 9 192 631 770) |
Thermodynamic temperature | K | (1.380 649 × 10^{-23} ∕ k_{B}) × (h ∕ 6.626 070 15 × 10^{-34}) × (Δν_{Cs} ∕ 9 192 631 770) |

And some derived units:

Measure | Unit | Definition |
---|---|---|
Time | second | Hz^{-1} |
Length | metre | v Hz^{-1} |
Energy | joule | W Hz^{-1} |
Pressure | pascal | W v^{-3} Hz^{2} |
Force | newton | W v^{-1} |
Mass | kilogram | W v^{-2} Hz^{-1} |
Electric charge | coulomb | A Hz^{-1} |
Electric potential | volt | W A^{-1} |
Magnetic flux density | tesla | W A^{-1} v^{-2} Hz |
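As a sanity check, each derived unit can be expanded back into SI base dimensions. A sketch, representing units as maps of exponents (the unit data is transcribed from the tables above; everything else is illustrative):

```python
# Each rate unit expressed in SI base-unit exponents (kg, m, s, A, K).
BASE = {
    "W":  {"kg": 1, "m": 2, "s": -3},
    "v":  {"m": 1, "s": -1},
    "Hz": {"s": -1},
    "A":  {"A": 1},
    "K":  {"K": 1},
}

def expand(powers):
    """Multiply out a product of rate units into SI base-unit exponents."""
    out = {}
    for unit, p in powers.items():
        for base, q in BASE[unit].items():
            out[base] = out.get(base, 0) + q * p
    return {b: e for b, e in out.items() if e != 0}

# joule = W Hz^-1 should come out as kg m^2 s^-2:
print(expand({"W": 1, "Hz": -1}))  # {'kg': 1, 'm': 2, 's': -2}
```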

Completely cromulent.
