Where we learn that taking something for granted may be the most normal thing to do.
This is a story about measurement, brought to you by the questions of who gets to define standards, what counts as average or normal, and the value that is placed on all of that.
Mathematics is itself a language; indeed it is the language par excellence of the exact sciences…What is more, mathematics has become a universal language, used everywhere in the world today, one of the very few universal languages that currently exist… Mathematical conventions, no less than poetical conventions, transmit meaning. Poetical conventions make it possible to convey impressions and arouse expectations; and by artfully manipulating semantic fields, they create imaginative contexts for them.
“Mathematics is the Poetry of Science” by Cédric Villani
(pg. 21-22)
I am, by far, not the first to approach these considerations around measurement, that bar by which we decide whether we have reached a goal, or whether we are better than before, or whether we can conclude that we are simply the best of all the rest. In the interest of readability, whether you’re reading this on a lunch break or just shifting gears between one activity of your day and the next, I am going to supply some solid facts here, and I definitely don’t want to submerge myself into the morass of the gray area. You know, that interval in between the wholes, that area where we sometimes ‘round up’ or ‘round down’ because, supposedly, the conclusion is then easier to digest?
Cuz, sure sure, doing that ‘rounding’ thing makes sense when you want a quick general measurement of something, a summary of sorts, but not so much when you are making a decision that’d have cascading and/or potentially everlasting consequences for others about that thing.
Rounding up/down makes sense if you’re in elementary school, before you become a big kid and tackle fractions or decimals. And you’re only working with paper and pen(cil). Or you’re in a time before computers. Or calculators.
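Here’s a tiny Python sketch, with made-up scores, of exactly what that rounding does to the gray area:

```python
# A toy sketch of how rounding swallows the gray area.
scores = [89.4, 89.5, 90.4, 90.5]   # hypothetical raw scores
print([round(s) for s in scores])    # -> [89, 90, 90, 90]
# Three distinct raw values collapse into a single 90. And note the
# quirk: Python rounds halves to the nearest EVEN number ("banker's
# rounding"), so round(89.5) == 90 and round(90.5) == 90 too.
# The summary got easier to digest; the differences got eaten.
```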
But as grown adults, nah, I don’t want to be dismissive of the gray area because that is the exact place where details lurk, and the details are usually the interesting – and differentiating – facts that absolutely need to be taken into consideration. Especially if what you’re measuring has value and may become a standard of comparison.
Let’s get to it.
…measurement procedures could be placed on a continuum which stretched from representational at one end to pragmatic at the other. Extreme representational measurement involved establishing a mapping from objects and their relationships to numbers and their relationships.
“Measurement: A Very Short Introduction” by David J. Hand
(pg. 17)
When you seek the measurement of a thing or a person or a place, you are looking to assign a quantity, or a summation of multiple quantities, to it. I’ve found, though, that it’s incredibly easy for that process to momentarily tip over, letting some qualitative info, filtered through the lens of a bias, get accidentally attached to your quantitative conclusions. And that can really (a)skew 😉 your measurement.
Yeah, it’s challenging enough to focus on measuring with consistency and accuracy, especially if you’re relying on the practices of others who have instituted some standardized way to measure. I know you don’t want to find yourself in the same sort of pickle that Lewis Fry Richardson found himself in.
In the process of studying the length of coastlines and the borderlines of countries, Richardson hypothesized that:
… the probability of war between neighboring states was proportional to the length of their common border… To test his idea, he set about collecting data on lengths of borders and was surprised to discover that there was considerable variation in the published data. … most borders and coastlines are not straight lines. Rather, they are squiggly meandering lines either following local geography or having “arbitrarily” been determined via politics, culture, or history.
“Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies” by Geoffrey West
(pg. 134-137)
Whoopsie.
Richardson was doing this work in, oh, 1951(!), so imagine all the things that had been concluded prior to that year, based on those inconsistent measurements. Decisions that had been made related to policies and laws regarding the environment, politics, and, yeah, humanity. Whoopsie, indeed.
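Here’s a toy Python sketch of why those published numbers disagreed. The ‘coastline’ below is a made-up squiggle, not Richardson’s actual data; the point is that measuring the same wiggly line with rulers of different lengths gives different answers, and the shorter the ruler, the longer the coast gets.

```python
import math
import random

random.seed(0)

# A hypothetical "coastline": a finely sampled, squiggly polyline
# (purely illustrative; not Richardson's border data).
coast = [(x * 0.01, math.sin(x * 0.3) + random.uniform(-0.05, 0.05))
         for x in range(2001)]

def measured_length(points, stride):
    """Sample every `stride`-th vertex and sum the straight segments.
    A coarser stride plays the role of a longer ruler that steps
    right over the small wiggles."""
    sampled = points[::stride] + [points[-1]]  # keep the endpoint
    return sum(math.dist(a, b) for a, b in zip(sampled, sampled[1:]))

for stride in (200, 50, 10, 1):
    print(f"ruler of ~{stride} vertices: length = "
          f"{measured_length(coast, stride):.2f}")
# The finer the ruler, the longer the "coastline" measures, which is
# exactly why the published border lengths varied so much.
```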
How about the form of measurement that, when it was first introduced to me in some classroom decades ago, I really truly distinctly remember having a visceral twitch of that-just-feels-wrong, deeeeep in my belly. Of course, I’m speaking of the bell curve.
Looking at Adolphe Quételet, you can see his intense interest in sociology through his mathematical studies. Though he employed the bell curve in his probability work, he was also conscious that it was a good tool for plotting events that could be shown as a distribution, as opposed to using a bell curve to draw conclusions about the individuals that were being, well, plotted. He was one of the first mathematicians in his field to acknowledge that individual human behavior had to be considered before broad conclusions could be made.
According to one historian of probability theory, “Adolphe Quételet was among the few nineteenth-century statisticians who pursued a numerical social science of laws, not just of facts.”
“The Error of Truth: How History and Mathematics Came Together To Form Our Character and Shape Our Worldview” by Steven J. Osterlind
(pg. 171)
The bell-shaped curve is referred to as the normal distribution, very much still used today in probability and statistical calculations, and in economics to approximate other probability distributions.
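For the curious, here is the equation behind that bell shape, the normal (Gaussian) density, jotted in a bit of LaTeX. The symbol \mu is the mean, the peak of the bell, and \sigma is the standard deviation, how widely the bell spreads:

```latex
% The normal (Gaussian) probability density behind the bell curve:
% \mu = mean (the peak), \sigma = standard deviation (the spread).
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
```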
In a letter from French physicist Gabriel Lippmann to mathematician Henri Poincaré, Lippmann wrote:
“Everybody believes in the [bell curve]: the experimenters because they think it can be proved by mathematics; and the mathematicians because they believe it has been established by observation.”
~
In science, as in so many other spheres, we often choose to see what serves our interests.
“Alex’s Adventures In Numberland: Dispatches From The Wonderful World of Mathematics” by Alex Bellos
(pg. 374)
What the bell curve has become, especially in educational circles, is a de facto measure of human performance, simply based on a grouping of repeatability, commonality, and consistency, which in turn drives the assumption that the bell curve represents normal or average.
Here we are again, at that place of realization that, silly me and silly you, we’ve spent the majority of our years assuming, nay, accepting, nay, not even thinking to question, that humans have been setting the standards of measurement objectively, for many a thing.
LOL in your face!
Bias said what?
Even slight or subtle variations in the conditions of measurement, multiplied over repeated observations, can grow to be a substantial influence on the outcome.
“The Error of Truth: How History and Mathematics Came Together To Form Our Character and Shape Our Worldview” by Steven J. Osterlind
(pg. 25)
When the measurement of anything is defined,
soon thereafter its value is defined, right?
Whether that’s because we’re making that value judgment as part of an act of comparison, or we’re proclaiming the value we’ll receive for attaining whatever was at the goal line of that measurable thing, there’s something cemented within us: we not only go along with ‘bigger is better’ but never even question whether the measurements we are told are baseline were (are!) born from objectivity.
As with the bell curve and the concept of the normal distribution, the acknowledgement and understanding of what standardization conclusively deems to be ‘normal’ or ‘average’ is also fraught with a la dee dah ignorance regarding the conditions that were present when said normalness or averageness was first established.
There’s a well-told story about early-20th-century mathematician Henri Poincaré and some baguettes. It’s a story about measurement and the conditions that can affect what you’re measuring. I can remember the first time I read that story, down to where I was sitting: in SanFran, at a bar, by myself, on a business trip, somewhere around 2012, drinking a glass of wine while reading Alex Bellos’s Alex’s Adventures in Numberland.
Anyway, the story made me take into account how multiple conditions can become layered and then start to meld and influence each other, as well as influence the overall impact they have on the measurement itself. For the baguette at the bakery, what was the temperature of the oven vs that day’s temperature in the bakery, and what about the temp outside the bakery? How about conditions around time, everything from the time of day the bread was baked, to how long after it came out of the oven it was bought and moved into a different environment (bakery to the buyer’s home), to how long after it was originally baked it was actually eaten?
Without knowing, or at a minimum without even questioning, what the conditions were to deem (a measurable) something as normal or average or standard, why do we just accept that to be a fact, one that lives on into perpetuity and ‘just is’?
THEN, what if you insert some condition unexpected (by most) into a measurement equation? Say, a global pandemic. What sorts of things cascade out from a worldwide event, affecting conditions so that what has been accepted, and expected, as the measurable norm is actually a bunch of bunk under the current conditions?
Despite the promise of certainty that numbers provided the scientists of the Enlightenment, they were often not as certain as all that. Sometimes when the same thing was measured twice, it gave two different results. This was an awkward inconvenience for scientists aiming to find clear and direct explanations for natural phenomena. Galileo Galilei, for instance, noticed that when calculating distances of stars with his telescope, his results were prone to variation; and this variation was not due to a mistake in his calculations. Rather, it was because measuring was intrinsically fuzzy. Numbers were not as precise as they had hoped.
“Alex’s Adventures in Numberland: Dispatches From The Wonderful World of Mathematics” by Alex Bellos
(pg. 351)
These sorts of errors are all external to the subject that has been measured; it’s not about humans being sloppy with how they are measuring stuff. And of course, my topic here, questioning the soundness of a measuring standard that was established in a past century and applied to a human, forever and ever into perpetuity (like a woman’s dress sizes, or the quantifiability of what is a proper parent, or the appraisal of a house in a particular neighborhood), is a completely different discussion to have than if we were talking about measuring a standardized piece of furniture that a human would just, you know, sit at or sit upon.
We can also distinguish between measuring to understand something, and measuring for operational purposes – to make a decision, improve process, and so on.
These fundamental aspects to measurement accuracy are sometimes called precision (how much repeated measurements fluctuate about a central value) and bias (any systematic departure from the underlying true value, affecting all of the repeated measurements). Different disciplines use other words for the same concepts – such as reliability and validity.
“Measurement: A Very Short Introduction” by David J. Hand
(pg. 75 and pg. 108)
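To make Hand’s precision-versus-bias distinction concrete, here’s a minimal Python sketch. The baguette framing and every number in it are my own hypothetical choices, not Hand’s: the takeaway is that averaging more readings tames the random scatter, but no amount of averaging removes a systematic offset.

```python
import random

random.seed(42)
TRUE_WEIGHT = 1000.0  # grams; a hypothetical advertised baguette weight

def weigh(bias, noise):
    """One reading: the true value pushed off-center by a systematic
    bias, then scattered by random noise (the precision problem)."""
    return TRUE_WEIGHT + bias + random.gauss(0, noise)

# An honest scale vs. one that systematically under-reports by 50 g:
for label, bias in (("unbiased scale", 0.0), ("biased scale", -50.0)):
    readings = [weigh(bias, noise=15.0) for _ in range(1000)]
    mean = sum(readings) / len(readings)
    print(f"{label}: mean of 1000 readings = {mean:.1f} g")
# Averaging shrinks the random scatter (precision improves with more
# readings), but the -50 g systematic departure (bias) never averages away.
```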
In my past life, when I owned a 3D animation company, we visualized the development of all sorts of products, basing the initial form factor on basic data sets that had been collected. The second stage of our development process would reference ethnographic data, centering the behaviors of the human consumer as the usability of the product was further developed. There came a point in this process (what we eventually coined Ethnographic Animation) where the empirical data might belie the data sets that we hadn’t ever questioned, or at least we hadn’t questioned the source of them. Those moments could feel revelatory.
Empiricism is the notion that all knowledge is derived from sensory experiences of the real world, rather than from theory or logic alone. Empirical science, then, proceeds by creating experiments to examine the real world. Data from such experiments allow us to build testable hypotheses that explain the data. In turn, we create more experiments to find additional data to support or disprove each hypothesis. And so on go the endless repeating, iterative methods of science.
“Notes on Complexity: A Scientific Theory of Connection, Consciousness, and Being” by Neil Theise
(pg. 123-124)
Key word here: iterative. Ever changing and evolving. Unstagnant. Flowing and molding the shore from the falloff of the energy’s movement. Be it a river, a vast ocean, or the progression that humans + nature make through time.
Thanks math, you’re the best.
As you write, measurement means making errors. When measurements are made of the same thing, say the speed of Saturn moving across the sky, it is good to take many measurements. The mean of the measurements may be the best estimate you can get. But when we measure IQ of many people we are measuring many different things. What does average mean then? My hunch is that average IQ is as meaningless as average weight of a book in a library. See Rose, T., 2016. The end of average: How to succeed in a world that values sameness.
Oooh, I think I’ve heard of this book, The End of Average. I’m curious about it. Have you read it?
I have read and reread “The End of Average”. It gave me insights into mistakes we often make when analyzing something we have tried to measure. “Looking for your keys where the light is good instead of where you dropped them.”
That’s great. It’s going on my book list ✅
Your best column yet. Will take at least one more reading to appreciate it.
Thank you for saying that, Gerald! What was it that you liked about it?
It is a subject I like. It sparks thinking. Took two readings to appreciate it.