Humans see colour with red, green, and blue cone cells, named for the colours they best detect.

Human cone peak sensitivities:

This is part of the reason why the labels L, M, and S (for “long”, “medium”, and “short”) are used exclusively in the literature: the traditional names for cone cells have almost no bearing on their actual receptivities.

The other part is that each type responds to a much broader range than its single peak; these ranges overlap so that no receptor monopolises any interior region of the spectrum.

Last year I was interested but didn’t have the time to dedicate; this year was going to be *different*. Then I almost forgot, was reminded about a week beforehand, forgot again and didn’t remember until the night of the 3rd of ~~Oc~~Inktober.

I aimed for something every few days, and it was going well until about the last week when life caught up (as it so annoyingly tends to) and I made a firm decision to focus entirely on other matters. (Life matters are also why there’s only been any mention here halfway into November.)

The point is, this is what there is to show for it all:

“Obnoxious watermark” because somebody insisted that one was too valuable to go on the internet. I kinda disagree but there’s an attempt to protect it anyway.

Takeaways from the experience:

- Focusing on texture alone can be quite fun or relaxing
- I can totally make arts if I put the time in. But I *don’t wanna*…

Anyway it was nice to whittle down the “ideas to do” list by three items. That dragon one is good enough to deserve a re-do in colour, at some point.

Hrm, 83% of depicted animals (can you find the fifth/sixth?) have wings. I wonder if there’s a connection here…

In terms of the engine and programming, the nature of this jump is determined by the values given to gravity and the initial impulse. But more practical is defining the height of the jump, and some quality that *could* be labelled the “floatiness coefficient”. Or it could just be the time to the top of the jump.

Standard advice is to experiment with some values until it feels good, but that’s lame. Let’s start from the basics, and quantify what impulse and gravity will yield the desired attributes instead. With calculus! You can gloss over the derivations for the resulting, calculusless formulae; I literally cannot stop you.

First the definitions:

- height above ground is given by *x*. Maximum height attained is the constant parameter *h*.
- time since the initial impulse is given by *t*. The ~~floatiness coefficient~~ time of maximum height is the constant parameter *T*.
- during the jump there is only a constant downwards acceleration, the magnitude of which is the constant parameter *g*.
- *v* is an alias for d*x*/d*t*.
- the upwards velocity that causes an actual jump is the constant parameter *u*.

The goal is to find *u*(*h*,*T*) and *g*(*h*,*T*).

Now, the maths. Acceleration, the sole thing governing motion here, has a few equivalent expressions:

\frac{\mathrm dv}{\mathrm dt} = \frac{\mathrm dv}{\mathrm dx}\,\frac{\mathrm dx}{\mathrm dt} = v \frac{\mathrm dv}{\mathrm dx} = -g

So:

\begin{aligned} \frac{\mathrm dv}{\mathrm dt} &= -g \\ -\mathrm dv &= g \,\mathrm dt \\ -\int_u^0 \mathrm dv &= \int_0^T g \,\mathrm dt \\ \left[v\right]_0^u &= \left[gt\right]_0^T \\ u &= gT \end{aligned}

And:

\begin{aligned} v \frac{\mathrm dv}{\mathrm dx} &= -g \\ -v \,\mathrm dv &= g \,\mathrm dx \\ -\int_u^0v \,\mathrm dv &= \int_0^h g \,\mathrm dx \\ \left[\frac{v^2}{2}\right]_0^u &= \left[gx\right]_0^h \\ \frac{u^2}{2} &= gh \\ u^2 &= 2gh \end{aligned}

These two results together give the formula for *g*.

\begin{aligned} (gT)^2 &= 2gh \\ g^2T^2 &= 2gh \\ g &= \frac{2h}{T^2} \end{aligned}

And so, *u* in terms of *h* and *T*.

\begin{aligned} u &= gT \\ &= \left(\frac{2h}{T^2}\right)T \\ &= \frac{2h}{T} \end{aligned}

And that’s that – with the desired apex and time to it known, the necessary impulse and downwards acceleration have been derived.

Suppose you don’t want to fiddle with gravity just to get a nice-feeling jump. The acceleration is already set and you just want a particular height without caring too much about how quickly or not-quickly that happens. The expression for this case was pretty much already derived:

\begin{aligned} u^2 &= 2gh \\ u &= \sqrt{2gh} \end{aligned}

Every other combination of the parameters probably isn’t important enough to enumerate explicitly, but can be derived pretty easily if necessary.

And what about the case where the height should vary with how long the jump key is held down? The proper way to handle this scenario is to not do it. That mechanic just reduces to the annoyance of always holding the button down regardless, except that one part where the level designer felt they had to make use of their “mechanic” *somehow*, so there’s a low ceiling with spikes or something, and then players have the even greater annoyance of trying not to hold it for too long but they’ll still hold it just a touch too long and get frustrated anyway, so just don’t do it. Build levels around a single, standard jump height and let the gameplay flow. Yes I have opinions.

Now what about numeric integration? Everything above is fine in the land of spherical cows, but suppose the state of the world is updated on a frame-by-frame basis and you don’t have anything resembling an analytical solution with ½gt^{2}. Do the formulae hold up?

They shouldn’t be too far off, because with constant acceleration independent of any current state, there’s no real way for errors to accumulate. Even with semi-implicit Euler integration like so:

```
vel += acc * dt; // update velocity first…
pos += vel * dt; // …then position, using the already-updated velocity
```

Given `acc` is a constant −*g* and `vel` starts at *u*, then after *n* timesteps `vel` will be `u − n*g*dt`. Which means the accumulated change in `pos` will be

\begin{aligned} &(u - g\, dt)\, dt\\ +\;&(u - 2g\, dt)\, dt\\ +\;&\dots\\ +\;&(u - ng\, dt)\, dt \end{aligned}

As an arithmetic series, this sum is equal to

\begin{aligned} &\frac n 2 \big[ (u-g\, dt)\,dt + (u-ng\, dt)\,dt \big] \\ =\;& \frac{n\,dt}{2} \big[ u-g\, dt + u-ng\, dt \big] \\ =\;& \frac{n\,dt}{2} \big[ 2u - g\,dt - ng\,dt \big] \end{aligned}

The top of the jump should be once *n*·*g*·*dt* = *u*, thus *n* = *u*/(*g*·*dt*).

\begin{aligned} \frac{n\,dt}{2} \big[ 2u - g\,dt - ng\,dt \big] &= \frac{u\,dt}{2g\,dt} \left[ 2u - g\,dt - \left(\frac{u}{g\,dt}\right) g\,dt \right] \\ &= \frac{u}{2g} [ 2u - g\,dt - u] \\ &= \frac{u}{2g} \big[ u - g\,dt \big] \\ &= \frac{u^2}{2g} - \frac{u\,dt}{2} \end{aligned}

The first part is the “proper” result, so there is a small error of *u*·*dt*/2, which will naturally decrease with smaller timesteps.

Trying to set the initial velocity to *u* + *g*·*dt* in an attempt to cancel the first timestep’s effect on it doesn’t actually help – it just makes the error term positive instead. However, using *u* + *g*·*dt*/2 seems to work fine. The internal representation of velocity at the apex will instead be slightly off (specifically, it will be *g*·*dt*/2 instead of 0), but that hardly matters because the next timestep makes that negative and still produces a descent.

So, at least in the case of semi-implicit Euler integration as above, a better formulation for *u* might be 2*h*/*T* + *h*·*dt*/*T*^{2}. Which is a pretty insignificant difference, and it’s unlikely all timesteps will be equal anyway… unless you’re doing something with an accumulator, but really, numeric integration is a whole set of textbooks of its own.

There’s a lot of talk over how Celsius’s basis in water is more useful, how negative means freezing, how Fahrenheit’s zero is very roughly the coldest weather that occurs. But both zeroes are equally arbitrary. Anders Celsius used water; Daniel Fahrenheit used a mixture of ice, water and ammonium chloride for whatever reason.

But neither of these arbitrary points is at all close to zero temperature. The thermodynamic temperature of an object keeps decreasing below either point, with no change in behaviour on the way down.

There is a lower limit for temperature, and the Kelvin scale uses it. And that’s it, really. Imagine if rulers had their zero point some particular distance along them, such that the lower end of the scale were always some specific negative value. Why? Why would you use that?

An absolute scale is the only sensible choice for dealing with temperatures. What’s the efficiency of a Carnot engine operating between 25 ℃ and 925 ℃? Beats me. What’s the efficiency of a Carnot engine operating between 300 K and 1200 K? Well that’s just 75%.
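For the record, that 75% is just the Carnot efficiency formula with absolute temperatures plugged straight in:

\eta = 1 - \frac{T_\mathrm{cold}}{T_\mathrm{hot}} = 1 - \frac{300\ \mathrm{K}}{1200\ \mathrm{K}} = 0.75

Try the same with the Celsius figures and you first have to add 273.15 to each anyway.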

Most people don’t need to determine these sorts of things. Hence, the second part: the relative scales have no useful merit of their own.

Discussion of temperature scales tends to focus around weather because that’s all temperature scales are usually used for. Fortunately, the Kelvin scale actually *does* have some decent applicability here too: 300 K (26.85 ℃) serves as a nice reference point for temperate temperature that verges on warm. Anything above 300 K is starting to get hot, and cooler than that is progressively fine, chilly, then freezing at 273 K.

In any other situation, the magnitude of the numbers is arbitrary anyway. In one world, you pre-heat your oven to 220 ℃. In another, you pre-heat to 490 K. (In the third world, you use 430 ℉.) Nothing is lost with the Kelvin scale. And now, if you want to know the pressure of an ideal gas filling that oven, or how efficiently it could drive a heat engine, there are no awkward conversions involved. Handy, huh…

Another benefit presents itself: most people don’t have the degree symbol on their keyboard. They certainly don’t have a button for “℃” (that’s one character). But anyone can type a space followed by the letter “K”.

Some people (a fair few, actually) insist that such temperatures are spoken as “300 Kelvin”, and will complain if you say “kelvins”. I don’t know why. There’s no style guide advocating that.

Kelvins are an SI unit, like joules and metres. That means they form plurals just fine, and the full name of the unit is not capitalised.

The whole point is that it *is* an absolute scale, so you really can speak of temperatures as a proper quantity, rather than a relative difference from some arbitrary reference point.

This video was too good to not share. It covers the entire history of Earth from 4.5 billion years ago to today, showing notable geological and biological events the whole way.

There’s a fair abundance of available information on continental drift, atmospheric composition, evolution etc. changing over time, but it’s great to see it together in context.

It gets a little hectic from the end of the Cryogenian, so it might have been nice to see the time rate drop a bit lower than ×190 000 000 000 000 for the time since the Cambrian explosion.

Earth is certainly a unique and interesting planet. Not only does it possess the most complex geologic system known to exist, but also no shortage of organisms more than willing to shake things up quite regularly…

A lot of people don’t like it because it bends the cards. It’s worth pointing out that while the “showy” shuffles with a flashy finish will bend the cards quite severely, it’s quite doable to riffle a deck with only minimal bending on the corners that doesn’t leave any permanent damage.

Regardless, here’s a simple method for performing a riffle (technically more of a weave, but with less effort) that involves absolutely no bending whatsoever. It’s trivially fast, easy, and done entirely in your hands – very effective for fidgeting when reading slabs of text anywhere.

**Step 1**: hold the edges of the deck between your thumb and fingers. Either pair of edges will work, but holding the short edges probably makes the most sense.

**Step 2**: hold the other pair of edges between the fingers of your other hand, then slant the deck to the side. Ideally the bottom should migrate to the base of your thumb.

**Step 3**: splay the bottom of the deck outwards by rotating your hands upwards while keeping pressure with your thumb.

**Step 4**: pull about half the cards out with your other hand. You should be able to maintain the splayed structure.

**Step 5**: push the splayed edges into each other. You might need to slide them back and forwards a bit, but the two halves should fit together with very little effort.

At this point, just square up the edges and you’re done. The main problem with shuffling like this, as with any in-hand shuffle, is that the bottom cards can be revealed if you’re not careful. A cut or two on the table surface is always good practice for any shuffle.

And now I feel like I must have illustrated WikiHow pages in a former life…

Well if you’re going to do something poorly, it’s nothing short of dishonest to go any less than *all the way*. In that vein, let’s define a system of units, analogous to the SI, where everything is based on how things change over time rather than the quantities themselves. These units will take the names of SI units wherever they are equivalent.

First and foremost is **frequency**. Quantities changing over time have little use without some concept of time itself. The unit of frequency is the hertz (Hz), obtained by defining the frequency of the radiation from the ground-state hyperfine transition of caesium-133 (Δ`ν`_{Cs}) as 9 192 631 770 Hz. Trivially, time, measured in seconds, is derived as Hz^{-1}.

The next base unit measures **velocity**. We’ll call this unit the “ved” (v), with the speed of light in vacuum (`c`) made to be 299 792 458 v. Thus a metre is derived as v Hz^{-1}, or v s if you really want.

Next is **power**. The value of a watt (W) is realised by defining the Planck constant (`h`) as 6.626 070 15 × 10^{-34} W Hz^{-2}, which then opens the way to numerous other mechanical units as described further below.

Onwards to **electric current**, where this all started. The ampere (A) is found when the elementary charge (`e`) is made 1.602 176 634 × 10^{-19} A Hz^{-1}. There’s not even a need to try here; it’s already silly.

And the last of the “important”/“necessary”/“sensible” units, **thermodynamic temperature**. There aren’t really units to do with rates of temperature; heat transfer, for instance, is built around energy. So the kelvin (K) is what happens when the Boltzmann constant (`k`_{B}) is 1.380 649 × 10^{-23} W Hz^{-1} K^{-1}.

But what of **amount of substance** or **luminous intensity**? Amount of substance is just a quantity – it’s a dimension if you say it is, but doesn’t really need to be. In my mind it’s far better to be more specific, like energy per bond or mass per atom, rather than quantifying “large amount of *something*, but I won’t explicitly say what.” And luminous intensity is radiant intensity as perceived by the human visual system, which can’t even be based in physical constants. So as far as I’m concerned, neither of these are a necessary basis for a practical system of units.

So, to summarise the foundations of the Units of Rate:

Measure | Unit | Definition
---|---|---
Frequency | Hz | Δν_{Cs} ∕ 9 192 631 770
Velocity | v | c ∕ 299 792 458
Power | W | (h ∕ 6.626 070 15 × 10^{-34}) × (Δν_{Cs} ∕ 9 192 631 770)^{2}
Electric current | A | (e ∕ 1.602 176 634 × 10^{-19}) × (Δν_{Cs} ∕ 9 192 631 770)
Thermodynamic temperature | K | (1.380 649 × 10^{-23} ∕ k_{B}) × (h ∕ 6.626 070 15 × 10^{-34}) × (Δν_{Cs} ∕ 9 192 631 770)

And some derived units:

Measure | Unit | Definition
---|---|---
Time | second | Hz^{-1}
Length | metre | v Hz^{-1}
Energy | joule | W Hz^{-1}
Pressure | pascal | W v^{-3} Hz^{2}
Force | newton | W v^{-1}
Mass | kilogram | W v^{-2} Hz^{-1}
Electric charge | coulomb | A Hz^{-1}
Electric potential | volt | W A^{-1}
Magnetic flux density | tesla | W A^{-1} v^{-2} Hz

Completely cromulent.

By far the most common chromaticity diagrams represent the x and y axes of the CIE 1931 xyY space, a simple transformation of CIE XYZ that preserves the ratios between primaries and isolates luminance to a single axis that can be ignored for the purposes of chromaticity. The reason this particular space is so abundant is that CIE XYZ is well-defined from physical phenomena, and forms the basis of almost every other space.

Most (if not all) of these diagrams will plot *x* and *y* on perpendicular axes without mentioning *z* at all, since it is effectively redundant and calculated by 1−*x*−*y*. However, as a ratio of three values, I think there’s value in emphasising the effect of *z* rather than relegating it to the unlabelled origin, and placing the points of 100% X, Y, and Z equidistant from each other to form an equilateral triangle – even if this is less immediately useful when given a particular pair of xy coordinates. The resulting shape is less skewed and okay I’ll just get on with it:

So what does everything here mean? First and foremost the big triangle is the entire range (“gamut”) of chromaticities within the XYZ colour space. Again, luminance is absent; any given chromaticity could be imperceptibly dark or retina-searingly bright. The red, green, and blue lines represent equal proportions of X, Y, and Z respectively. For example, every chromaticity comprised of 30% Y is along a horizontal line 30% up from the bottom edge, and 60% X is 60% of the way from the Y-Z edge to the X corner. It reads exactly like a standard xy diagram that has been scaled down vertically to 87% and skewed 30° right.

Each corner represents “light” that is entirely comprised of the labelled XYZ primary, with an equal amount of all three in the middle. I say “light” because these primaries would somehow require a negative amount of light – they’re not just “invisible”; they literally do not exist. However, these imaginary colours can be mathematically “mixed” to yield the entire range of *real* colours, shown by the tongue-shaped area in the middle.

The curved boundary of this shape is the spectral locus – every colour produced by a single wavelength of light. Of note is that no mixture of real colours can ever perfectly replicate something on this boundary (though it can get imperceptibly similar along the straighter sections). Every monochromatic light is unique. As a result, whatever display you’re looking at can’t correctly produce them, so this region has been uniformly desaturated as to be correct relative to the grey behind it.

Your display can produce real colours though, which the inner triangle approximates. More accurately, this is the full range of chromaticities that can be encoded by the sRGB colour space of the image, which is mostly what typical displays can represent too. Thus this triangle is the only part of the diagram drawn with the actual chromaticities it refers to. Again, luminance is ignored, so for instance, the grey part equally refers to white and black alike.

I feel inclined to point out that the rendering above is one of the most accurate CIE chromaticity diagrams you’ll find. Others will neglect sRGB gamma which leads to problems all round, or do weird things with brightness to produce misleading artefacts around the middle. They also desaturate (or worse, clamp) out-of-gamut colours by a varying amount, leading to “rays” of a single colour emanating from the centre. Don’t fret if this means little to you; the point is that I’m right and everybody else is wrong, *obviously*.

And this isn’t just some internet nobody’s conception of human behaviour – they’re all citing Kruger & Dunning (1999) (but usually less explicitly). Well, since we know the source, let’s find a copy of the original data just for completeness’s sake.

Hm. That’s… quite different. There’s no trough at all, just a mostly-steady line (two of the four studies had an upwards incline, but all together it’s not enough to make such clear inferences).

This lays the basis of what I shall refer to as the “meta-Dunning-Kruger effect”:

people who know the least about the experimentation and results of Kruger and Dunning are disproportionately inclined to make authoritative claims about it.

(while looking for the sample images above, I found this post which laments the exact same trend. Dammit, ninja’d!)

So first of all, what is the actual trend described by the eponymous Effect? Well I would implore you to read the actual paper, *Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments*, because it’s quite interesting on its own and this way you’re surest not to fall victim to the meta-Dunning-Kruger effect yourself. But you’re a busy person so I’ll summarise it thusly:

people who perform poorly at a task tend to *rank* themselves significantly higher than their actual position, and those who are most competent tend to rank themselves lower.

Emphasis on “rank” is not undue: the people who do worst do *not* (necessarily) believe themselves to be exceptionally good; they just think they’re *better than* most others. And notably, those at the absolute top are not “enlightened” of their true ability – they still think a quarter of people would do better (this is called impostor syndrome).

But the most notable thing is that there is *no* recorded “Valley of Despair” – participants’ perceived performances are instead remarkably *consistent* across all abilities, and I think this is a far more fascinating insight into human cognitive biases (as well as being backed by actual experimental data). So those at the bottom *are* aware of things they don’t know, but still think whatever they do have is enough to give them an edge over the others they imagine don’t have that slight ability.

What’s impressive is just how *pervasive* this “Valley of Despair” is. That it and its accompanying “Mount Stupid” are given the same names every time shows how often “people these days” will quickly copy something they saw on the internet with no independent effort of their own.

I suspect this trend of “Mount Stupid” was instigated by SMBC comics, but it’s worth noting that that is a comic making a joke which itself makes zero reference to the work of Kruger & Dunning. So he gets a begrudging pass, but everyone blindly labelling that “Dunning-Kruger” still fails. Seriously, some images actually cite the paper by its title and even by the date and *journal* of publication but apparently didn’t bother to actually, you know, check what they were citing. (the first example image up the top claims they won a “Nobel Prize Psychology” – an award that has never existed. Details, details; the prestigious Nobel Prize and that comedic Ig Nobel Prize are similar enough.)

What genuinely sucks about the work of Kruger & Dunning as purported by its perennial proponents is how it is used as a quick’n’easy dismissal of genuine expertise: “oh my, you’re mighty sure about this aren’t you? Clearly you know absolutely nothing and I, recognising my faults, am the actual knowledgeable one here.” Just one more example of anti-intellectualism to add to the list.

The meta-Dunning-Kruger effect really is quite ironic and describes itself well: as those advocating the Dunning-Kruger effect would eagerly tell you, knowing just the name of something would make someone extremely confident about the details of what it represents.

These feathers didn’t come from just nowhere, and it turns out that at least a dozen dinosaur taxa more basal than birds are preserved with some sort of filamentous integumentary structures (that is to say, structures based in the integument that resemble filaments) which bear a remarkable homology to avian feathers! Just how far back do these “proto-feathers”, henceforth “feathers”, go? Here’s the story so far:

Some time after birds are invented, humans discovered both birds and feathers, and concluded that birds had (and have!) feathers.

The holotype of *Archaeopteryx* is a single preserved feather, discovered in 1861. This specimen, having been quarried from Tithonian limestone, reveals the existence of at least one feather 150 million years before the word “feather”.

Only a few years later, the rest of *Archaeopteryx* was described, revealing that feathers predated birds and were, in fact, present in at least one non-bird (and, contentiously at the time, also a dinosaur).

In the 1960s, the theropod *Deinonychus* was described and put forward as evidence that birds are also theropods. Further analyses support the inclusion, and throughout the 1970s feathers were believed to be present in all theropods.

*Sinosauropteryx* was described in 1996 alongside clear feather imprints, definitively bringing feathers to the coelurosaurs.

The ornithischian *Tianyulong* is found with what is (dubiously) a row of filamentous integumentary structures all along its back. Wait, an ornithischian? Everything else here is under Saurischia…

Despite uncertainty surrounding *Tianyulong*, *Kulindadromeus* comes along as another closely-related, fairly basal ornithischian with more clearly-preserved integumentary structures. If these are present in Ornithischia (or at least its basal members), then feathers must have originally been present in all of Dinosauria.

As recently as 2018, analysis of the pycnofibres (integumentary structures of pterosaurs) of exceptionally well-preserved anurognathids concludes they are “remarkably similar to feathers and feather-like structures in non-avian dinosaurs.” Which indicates that (primitive) feathers are quite possibly ancestral to dinosaurs, and their close relatives, and to dinosauromorphs altogether.

And who knows, maybe at some point an aphanosaur or close relative will be found preserved with a similar sort of covering. “Avemetatarsalia” is supposed to mean “bird metatarsals” (no, really!), but perhaps it shares even more with birds than immediately apparent.

At this point the term “feather” is really pushing it. But relaxing those standards even more, early precursors to feather development are indeed found in crocodiles and even squamates. According to this research, mammalian hair and ~~avian~~ ~~maniraptoran~~ ~~coelurosaurian~~ ~~theropodan~~ ~~dinosaurian~~ avemetatarsalian(⁇) feathers are both derived from the same anatomical features despite their manifest differences.

Perhaps even the flagella of bacteria will turn out to share some sort of phylogeny with feathers, at which point (being found present in prokaryotes) we’ll throw up our hands in resignation and proclaim that feathers are intrinsic to life and have existed since the very beginning. Or perhaps not.
