# Cargo Cult Math

---

*on the use and abuse of math in finance, statistics, complexity science, and beyond.*

I have a pet theory that falls into the entirely made-up discipline of *intellectual anthropology*. It’s close to scientism or ‘cargo cult science’, but not quite the same. More like ‘mathematicism’, if there were such an analogous thing, but that’s a bit of a tongue twister, and so I prefer the ring of ‘cargo cult math’. I think people are obsessed with quantified taxonomy as a method of attempted understanding. They look at some set of entities, split them up into different made-up categories, attach quantificational definitions to these categories, and feel like they have achieved something of intellectual value. And it can be seductive because they may have actually identified different properties. But it can also be facile if they don’t explain why these differences exist in the first place, or whether they mean anything, or why anybody should care. They act as if taxonomy is an end in itself, rather than a tool for further understanding, or the product of prior understanding.

That’s why I call it ‘mathematicism’ as analogous to ‘scientism’, or cargo cult math as opposed to cargo cult science, because it replicates the form of mathematics, but not the real function, since it betrays a lack of understanding of the purpose of real mathematics. It *seems* like mathematics because it *seems* like what mathematicians do is make up definitions and then categorize things according to them. But they don’t do this at all — they identify relevantly different properties *first*, and then they deduce the underlying definitions that cause these properties. If you do this properly, you cannot fail to understand why the differences exist at a fairly fundamental level. When mathematicians taxonomize, it is the end result of something they are doing far more rigorously and purposefully — not for the pure (pointless) joy of creating taxonomies.

Ironically, perhaps, cargo cult math strips the real joy from mathematics. I can virtually guarantee that any mathematician will tell you that the most prevalent and irritating misunderstanding of their discipline is the idea that mathematics is not creative. That it is ‘analytical’ (Christ, what an awful weasel word) rather than ‘creative’, ‘expressive’, or ‘imaginative’, as if worthwhile creativity were possible without knowledge and understanding. I challenge anybody skeptical of this outlook to find a product of the human mind from the twentieth century more creative than Alan Turing’s *On Computable Numbers* or Claude Shannon’s *A Mathematical Theory of Communication*. Only from a base of profound creativity do worthwhile taxonomies emerge, and when they do, they are understood by their creators.

Mathematics is the language in which the understanding of consequence and causation is best coded and conveyed. I do not make these claims in any maximalist sense: I do not deny the genius behind what is more widely and traditionally thought of as ‘creative’ — *Guernica*, *Beloved*, and *The Second Piano Concerto* rank up there too from the twentieth century, but note that Picasso, Morrison, and Rachmaninov all *understood* what they were doing … The problem isn’t even that this causes me moral or aesthetic concern, although it does, but rather that I sincerely believe this precise, identifiable pathology causes tangible harm in the real world. We are in serious danger of living through an unrelenting explosion of cargo cult math, because the pure pointless joy of creating ever more complicated, and not-at-all understood, taxonomies is becoming cheaper and cheaper due to advances in cloud computing and artificial intelligence.

To be clear, I have no issue with artificial intelligence. It’s fascinating stuff, *as mathematics*, and it will surely be profound as a driver of economic productivity and human wellbeing. I am not arguing against practitioners who know what they are doing. I am arguing — much as I am sure the former would, even more forcefully — against practitioners who *don’t know *what they are doing, but who are doing it nonetheless because it is cheap, and getting cheaper. I have noted before the tasty irony that a field born out of the purest of pure mathematics is now powering a conceptual shift towards the ever-more applied. There I joyfully gave the positive case. Here I sternly make the negative case. We need to learn to identify cargo cult math to root out epidemics before they start. Is your child engaging in cargo cult math? How can you be sure? This is only partly a joke.

Once you grasp this, you will start seeing it everywhere. I called it ‘intellectual anthropology’ because I think it is a pathology of general thinking, unconfined to any specific area of inquiry. I will diagnose two areas in which cargo cult math has definitely taken hold, but only because I am relatively familiar with them. I welcome totally different case studies in the comments or on Twitter. These areas are: finance, via microeconomics, and statistics. I will conclude with an optimistic note on where cargo cult math is not only being opposed, but arguably is being overcome: complexity science, and a case study I find particularly fascinating on the natural scaling laws of biological organisms. I contemplated critically covering macroeconomics as well, but regular readers (if any exist) are likely bored of me talking about this by now. Look up Modern Monetary Theory if you think I’m exaggerating how simultaneously stupid, serious, and hilarious the consequences can be. In finance I’m moving on to the more applied. But remember, it’s going to be everywhere. Get vaccinated while there’s still time.

**Finance**

Alex Adamou and Ole Peters run the London Mathematical Laboratory, and I have had the pleasure of meeting both and discussing a number of their ideas. One of the research projects of the LML is to rethink the mathematics of microeconomics by more rigorously analyzing the extent to which economic observables are or are not ‘ergodic’.

Now this is a dense concept, drawn from physics, that will at first seem bizarre to apply to economics. But unlike my occasional ramble on Medium, the good people at the LML have worked through this theory meticulously. Here I provide only brief coverage of their vast work, specifically teasing out the implications for finance, and this is more a commercial for their work than anything else. The reader is encouraged to follow up on two links in particular. The first is legendary asset manager, financial commentator, and finance professor Michael Mauboussin interviewing Peters on this topic. It requires a sliver of background knowledge of finance but otherwise should be accessible. Peters does a great job of explaining the physics, and the point of the interview is to introduce these concepts from scratch to financial professionals. The second is the full lecture notes, in all their glory. The selfish reason I’d suggest for tolerating me first rather than going straight to their work is that Peters is exceptionally diplomatic to those he criticizes in the interview. I, on the other hand, see no need for such nonsense when the stakes are this high …

Economists often apply concepts from physics and sometimes this makes no sense. One example is a concept from statistical mechanics called ‘ergodicity’. Stated succinctly, an ergodic observable is one whose time and ensemble averages are equal. In other words, measuring an observable over a long time in a single system and at a single time over many systems produces two sets of measurements, whose average values are the same. This can happen in many ways. Usually, we imagine that the observable for each system ends up, after a long time, being drawn from a common and stable probability distribution. In effect, there is convergence to an equilibrium regime in which all trajectories look the same, statistically speaking.
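For concreteness, here is a toy sketch of the definition (my own illustration, not the LML’s): independent, identically distributed draws are about the simplest ergodic observable there is, so the time average of one long trajectory and the ensemble average across many systems land in the same place.

```python
import random

random.seed(0)

N = 100_000

# One system observed at many successive times...
time_avg = sum(random.gauss(0, 1) for _ in range(N)) / N

# ...versus many independent systems observed at a single time.
ensemble_avg = sum(random.gauss(0, 1) for _ in range(N)) / N

# For this i.i.d. observable the two averages agree (both near the
# distribution's mean of 0), which is exactly what ergodicity demands.
print(time_avg, ensemble_avg)
```

Of course, drawing both averages from the same stable distribution makes this example ergodic by construction; the interesting cases are the ones where that construction fails.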

Statistical physicists, starting with Boltzmann, developed the idea of ergodicity to justify using ensemble averages to model macroscopic quantities, such as pressure and temperature in fluids. Really, these are better described as time averages over bajillions of classically mechanical collisions. The latter are hard (read: impossible) to compute, while the former are a doddle once the stable probability distribution is known.

One upshot is that the ordering of observations, or, if you prefer, the direction of time, doesn’t matter when computing the average of an ergodic observable. Every system will, over enough time, visit every possible state with the same frequency as every other system. An observation can be dependent on previous observations but this dependence decays quickly. After a sufficient separation time, pairs of observations look causally unrelated — like independent draws from a fixed distribution.

A great deal of financial modeling uses techniques — most notably expectation values — which are appropriate only if the corresponding observables are ergodic. Typically, they are not. (Indeed, one trick is to find transformed observables which are, so that the techniques can be rescued.) In what is called ‘modern portfolio theory’, by which a scarily large proportion of asset allocation in the financial services industry is carried out, this is a particularly egregious error. Clearly, events that are represented by numbers in finance are causally dependent on one another and take place in a world in which time has a direction. Contrary to widespread belief, malignantly infectious in finance itself, financial capital is merely a representation of production capital. The numbers in finance are not a closed game, but reflect things in the real world, where causation is most definitely a thing (causation ‘has ontological status’ as a mathematician friend of mine once enjoyed quipping of whatever either was or was not a thing). Acting as if none of these totally obvious observations are true leads to making mathematical statements about finance that are internally consistent but *not even wrong*. They are too meaningless to be wrong. They are just gibberish.
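The gap between the two kinds of average is easiest to see in a multiplicative coin game of the sort Peters likes to use (this is my own rough sketch, with invented numbers): each round, wealth goes up 50% on heads and down 40% on tails. The ensemble average grows 5% per round forever; the time average of any single trajectory shrinks.

```python
import random

random.seed(1)

UP, DOWN, ROUNDS = 1.5, 0.6, 10_000

# Ensemble average: the expected one-round growth factor is
# (UP + DOWN) / 2 = 1.05, so the mean across many parallel players
# grows about 5% per round.
ensemble_growth = (UP + DOWN) / 2

# Time average: a single player's long-run per-round growth factor is
# the geometric mean, sqrt(UP * DOWN) ≈ 0.949, so the typical
# trajectory decays toward zero.
time_growth = (UP * DOWN) ** 0.5

# One long simulated trajectory shows which average a single player
# actually experiences:
wealth = 1.0
for _ in range(ROUNDS):
    wealth *= UP if random.random() < 0.5 else DOWN

print(ensemble_growth, time_growth, wealth)
```

Expectation values describe the ensemble; a single investor lives along the time axis. That is the entire point.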

One such observation concerns the Sharpe Ratio: the excess return of an investment over the ‘risk-free rate’ divided by its standard deviation, usually given an interpretation along the lines of *return adjusted for risk*. (Side note: in what world can a ‘rate’, as in a numerical representation of the compounding of capital over time due to reinvested profits, be ‘free’ of ‘risk’? Mark Spitznagel, are you listening? Do you have any thoughts or are you too mad to articulate them?) Peters’ and Adamou’s insight is that the Sharpe Ratio is not dimensionless. A dimensionless constant is one that has no units. All pure numbers are like this — pi, e, etc. — but so are things like the proton-electron mass ratio and the strong coupling constant. You don’t need to have any idea what these physically mean (I don’t!), but the point of any *dimensionful* constant is that the number will change when you change the units. The (numerical) speed of light changes if you use feet rather than meters, or hours rather than seconds, and so while the speed of light in an absolute sense is important, any number associated with it cannot be fundamental. This is why you just set it to 1 in general relativity — there is no reason not to and it makes the algebra much easier.

The Sharpe Ratio, often held up as a kind of sacred, pure, and *fundamental* characteristic of the concept of ‘risk-adjusted returns’, *is not dimensionless*, meaning that it is not fundamental and will change with the units of its dimensions. Its dimensions are one over the square root of time, or just ‘time’ for the purposes of this discussion. This is very dangerous territory to be plodding around in if we aren’t clear on how ‘time’ itself behaves in the system for which we have chosen a supposedly fundamental characterizing number, the dimensions of which are time …

Suppose, for example, that we think ‘the speed of light’ is foundationally meaningful as a dimensionless constant because we are silly geese. We then start changing the units in order to confuse ourselves. We change it from meters per second to centimeters per second, then again to millimeters, then micrometers and watch what happens to the number. OH MY GOD, IT’S GETTING BIGGER! It’s going faster and faster!

* pause to contain hysterical laughter and regain our faculties before getting super real *

Now consider the Sharpe Ratio. Its units are one over sqrt(time). Let’s do something sneaky and change time from a year to a month. Then again to a week. Then to a day. OH MY GOD, IT’S GETTING SMALLER! The risk-adjusted returns are going down and down! This might seem rather obvious as both ‘risk’ and ‘returns’ are borne out over time. But we can be even sneakier and use leverage. Obviously if we have positive expected returns then leverage will magnify them. But will it change our *risk-adjusted* returns? No, it will not. The Sharpe Ratio does not change with leverage. I don’t want to diverge too far into the algebra of this, but for readers vaguely familiar with the concept, the Sharpe Ratio is the gradient of the efficient frontier, which is a straight line if a risk-free asset is available (and hence can be used to provide leverage). So applying leverage moves the portfolio up the efficient frontier, scaling ‘risk’ and ‘return’ in precisely the proportion that leaves the Sharpe Ratio unchanged. So now what can we do? We can invest over as short a period as possible to get risk and return as low as possible, then juice returns with leverage, and apply repeatedly over time periods that compound back up to a full year. We should juice the units like this more often! Except that in this example, that’s exactly what has happened in real life, in trillions of dollars’ worth of capital allocation decisions over at least 40 years.
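Both the unit-juicing and the leverage-invariance are two lines of arithmetic. A toy sketch (the 8% excess return and 16% volatility are invented, and i.i.d. returns are assumed purely so that the mean scales with time while the volatility scales with its square root):

```python
# Hypothetical asset: 8% annual excess return, 16% annual volatility.
mu, sigma = 0.08, 0.16

annual_sharpe = mu / sigma  # 0.5

# Shrink the measurement period: the mean scales with time, the
# volatility with sqrt(time), so the ratio carries units of 1/sqrt(time).
monthly_sharpe = (mu / 12) / (sigma / 12 ** 0.5)    # ≈ 0.144
daily_sharpe = (mu / 252) / (sigma / 252 ** 0.5)    # ≈ 0.031

# Leverage multiplies excess return and volatility equally, so the
# Sharpe Ratio is untouched.
leverage = 3
levered_sharpe = (leverage * mu) / (leverage * sigma)

print(annual_sharpe, monthly_sharpe, daily_sharpe, levered_sharpe)
```

Same asset, same ‘risk-adjusted return’ concept, and the number moves by a factor of sixteen just by changing the clock — while tripling the actual exposure moves it not at all.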

There are a number of dire philosophical errors implicit in this approach. One is the idea that the ‘risk’ of an investment decreases over shorter and shorter periods of time *and due to no other factor*. Nope, it turns out that leverage makes no difference to risk if we are investing over zero time (which ought to make you wonder what the word ‘investment’ even means, but forget about that for now). The other is that the appropriate average is drawn from the set of every possible outcome rather than the set of consequences of playing the same game repeatedly. What is the individual payoff in a game in which one million people each agree to one round of one-in-ten-thousand Russian roulette in exchange for $1,000? It might not be so bad. What about if you, personally, committed to playing one million rounds of one-in-ten-thousand Russian roulette and got $1,000 every time you didn’t die? It doesn’t really matter exactly how much you value either game. What matters is realizing that they are different games. So too is a huge number of people each risking tiny possibilities of financial wipeout different to one person risking it a huge number of times in succession. The normal workings of an insurance company are different to the global financial crisis.
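The two roulette games are worth actually computing, if only because the numbers are so lopsided (a rough sketch, using the $1,000 stake and one-in-ten-thousand odds from the example above):

```python
p_die = 1 / 10_000
payoff = 1_000

# Ensemble game: a million different people each play one round.
# The average payoff per person is just the single-round expectation.
expected_one_round = (1 - p_die) * payoff   # ≈ $999.90 per player

# Time game: one person plays a million rounds in succession.
# The probability of surviving to collect anything at all:
survival = (1 - p_die) ** 1_000_000         # ≈ e^-100, about 4e-44

print(expected_one_round, survival)
```

Same per-round odds, utterly different games — one is the normal workings of an insurance company, the other is a crisis.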

To tie this together mathematically, we can use leverage to drive the return rate of the ensemble-average portfolio to infinity, while keeping its risk-adjusted return rate within whatever bounds we choose. Yippee! Lever up! Never mind that the time-average of returns will converge to zero. No need to be a party pooper.

What a relief that time has no direction in finance and causation isn’t real! Otherwise, the math there could have gotten confusing really fast. I hate figuring out how to cancel infinities — is it L’Hôpital you use?

Sorry, enough math jokes. My point is basically that this whole thing *is* a massive math joke. It would be funnier if it wasn’t so dangerously stupid.

And it is really quite hard to describe just how stupid this is. It’s almost beyond stupid. It requires quite a high level of intelligence to make an assumption so wrong. It’s like somebody set out to make the most complicated *and *the most disastrous mistake they could, as a practical joke to see how much damage it could cause if believed. Welcome to ergodic finance, everybody. Leverage or not, the only thing that goes up in a crisis is correlation. In fact, leverage makes it worse.

While Peters limits himself to this area of finance as an especially egregious misapprehension of ergodicity, it is by no means the only example of cargo cult math in the industry. Most examples that come to mind easily for me all stem from a point alluded to above: the numbers in finance are not a closed game, but reflect things in the real world, where causation is most definitely a thing. To loop back to my worry about AI, which essentially stems from blindly trusting in ‘big data’, a good point to focus our suspicion is on statistics in finance in general.

**Statistics**

Now as with AI, I am not claiming that statistics cannot be done properly. My concern, as with AI, is that the ever-increasing multitudes of data make it ever-more appealing to do statistics improperly. Statisticians presumably know what they are doing, by and large. But the vast majority of applications of statistics outside mathematics and the natural sciences, as far as I can tell, are pseudo-intellectual nonsense. It often doesn’t even aspire to be meaningful; it is ‘bullshit’, which I use not as a crass insult, but in the philosophical sense developed by Harry Frankfurt that may be crisply captured as follows: *an untruth that is not a lie, and so is not intended to deceive, but uttered without the speaker having any genuine interest in the truth value of the proposition, and so proposed to achieve some other goal entirely, having nothing to do with conveying a meaning.* You have definitely encountered this. If anybody has ever said something like: “I think event x is y% likely,” they were almost certainly bullshitting; they were engaging in cargo cult math.

But before we get carried away, we should note that there are three reasons why maybe they were not. Firstly, they may have analytically derived a probability density function that describes the event space of x. Secondly, they may have constructed one synthetically by observing past happenings, but intuited that the future will not necessarily resemble the past and so used this distribution as an indication of a base rate to be deviated from, given the essentially different characteristics of the likely future to the known past. Thirdly, they were proposing odds for what they deemed to be a fair bet.

There is a fourth option that has shades of all of these, but unfortunately makes absolutely no sense: they may have been giving a sense of their ‘conviction’ in a prediction. This is a classic concept of cargo cult math. It really is a first-ballot hall of famer. Like all good cargo cult math (or science) it looks very much like a rigorous and serious thing to say. It’s a number. That’s a good start. If they are really sneaky, it will be a non-obvious but still fairly round number; nobody takes a 50% prediction seriously. But equally, nobody will take a 37.287% prediction seriously, because how could you possibly know something so precisely?

Pro tip: always predict things with 40% certainty. That way if the thing happens, you didn’t pick so low a number that your prediction seemed like you had a terrible conception of how likely it really was, but if it doesn’t happen then you’re still fine because you said it was less than 50% likely.

But my satirical advice only works if its audience is already committed to cargo cult math — it clearly has no real information content. What it sneakily elides is that a 40% prediction is the same as a 40.000% prediction, which is exactly as precise as a 37.287% prediction. Clearly then, *any* prediction is spuriously precise. How can you possibly predict anything with any such level of precision?

There are three real, rigorous ways, which I already outlined above. But I would suggest that it is extremely unlikely that any of them is what has actually happened. If the ‘conviction level’ is specifically a prediction about the future, then option 1 is almost certainly out, because most predictors don’t know enough about statistics to explain how they derived a probability density function. Option 2 can also be copped out of, because ‘the future won’t be like the past’. But option 3 is your ace in the hole. If you think the probability of some outcome is x%, and that (1-x)/x : 1 is therefore a fair bet, then a price lower than x means odds better than those you deem to be fair. If you don’t take such a bet, then you don’t really believe your prediction.
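The option-3 arithmetic is worth making explicit. A minimal sketch (the 40% conviction and the 2:1 offer are invented for illustration):

```python
def fair_odds(p):
    """Payout-to-stake odds at which a bet on an event of
    probability p breaks even in expectation."""
    return (1 - p) / p

conviction = 0.40
odds = fair_odds(conviction)   # 1.5, i.e. a fair bet pays 1.5 : 1

# Anyone offering better than fair odds -- say 2 : 1 -- is offering
# free expected money to someone who genuinely believes the 40% figure.
offered = 2.0
should_take = offered > odds

print(odds, should_take)
```

If the 40% is real, refusing 2:1 is leaving money on the table; if it was never real, the refusal tells you so.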

As the economist Alex Tabarrok popularized, “*a bet is a tax on bullshit*.” This is one small way you can chip away at cargo cult math. If somebody makes a bullshit cargo cult math prediction, challenge them to a bet based on the implied odds. If they take the bet, that will actually be a very healthy sign, so be sure you aren’t bluffing to begin with or else it could get embarrassing. If there are prohibitions on gambling in whatever setting you might want to implement this approach in, you can always do it for trivial amounts of money so that the point is more one of pride, or you could commit to full ‘pride monetization’ by computing Brier scores over a long run of predictions of unrelated events.
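For the ‘pride monetization’ route: the Brier score is just the mean squared error between stated probabilities and what actually happened. A hypothetical sketch (both track records invented):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1
    outcomes. 0 is perfect; forecasting 0.5 for everything scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]                                 # what happened
calibrated = brier_score([0.9, 0.2, 0.8, 0.7, 0.1], outcomes)
hedger = brier_score([0.4] * 5, outcomes)                  # the perpetual 40-percenter

print(calibrated, hedger)  # ≈ 0.038 versus 0.28 -- the hedge is exposed
```

Over a long enough run of unrelated events, the always-40% strategy can no longer hide behind any single outcome.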

This isn’t perfect, by any means, but frankly anything that forces skin in the game — even reputational skin — is better than nothing, and will very probably cut down on the bullshit. Whatever the method, in the long run, you will help weed out pernicious cases of people who have no idea what they are talking about openly engaging in cargo cult math for some reason other than contributing to the correct prediction — professional prestige, for example, or misguided institutional architectures that pressure participants to make meaningless predictions they would rather not.

The statistical equivalent of the ‘risk-free rate’ is almost certainly ‘statistical significance’. Like ‘risk-free rate’ it has a sensible enough technical definition: a result is statistically significant when the probability, under the null hypothesis, of observing a value equal to or more extreme than the variate falls below the chosen probability of rejecting a true null hypothesis. I won’t dwell on this for long as I am indebted to Stephen Ziliak and Deirdre McCloskey and their brilliant *The Cult of Statistical Significance*. As the reader can probably tell, they are not fans either. The two take aim at the *sizeless scientist*, who cares only about establishing to an entirely arbitrary degree of certainty that an effect is real, and cares not at all for how big or small the effect is:

“*After Galton and Pearson, and especially after Ronald Fisher, the statistical sciences have slipped into asking a philosophical and qualitative question about existence instead. The scientific question is how much this particular bridge, or a bridge of this particular kind, can tolerate thus-and-such forces of stress. There may ‘exist’ a stable bridge. But unless the magnitude and limits of stability can be given quantitatively in the world we actually inhabit the knowledge of whether it exists is unhelpful. No astronomer is interested in the question of whether there is some effect of the rest of the galaxy’s gravitation on the Oort cloud. No scientific brewer of Guinness will ask whether bitterness ‘exists’ — as a careful student of hops chemistry, and a profit center, he is forced to ask how much. The question of whether has, as John Dewey observed, ‘no scientific meaning,’ no big bang. Being ‘exclusively theoretical’ no curiosity is aroused by it because it makes ‘no difference’ anywhere. Not even in philosophy, pragmatically considered, the great philosopher said.*”

The book’s subtitle is *How the Standard Error Costs Us Jobs, Justice, and Lives*, and after laying out the theory, the later chapters explore the widespread harm done by statistical illiteracy in economics, psychology, medicine, biology, law, and more. I know far less about these fields than I do about finance, and besides, they have already written the book, so I encourage the reader to go read it. Especially if they use statistics in their work.
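Ziliak and McCloskey’s sizeless-science complaint is easy to reproduce with a z-test on a vanishingly small effect (my own toy numbers: a true mean of 0.001 against a null of zero, with unit variance):

```python
import math

effect, sigma = 0.001, 1.0  # a real but utterly trivial effect

# With enough data, any nonzero effect becomes 'statistically
# significant' -- without becoming one bit more important.
for n in (100, 10_000, 100_000_000):
    z = effect / (sigma / math.sqrt(n))   # z-statistic of the sample mean
    p = math.erfc(z / math.sqrt(2))       # two-sided p-value
    print(n, z, p < 0.05)
```

The shrinking p-value here measures the size of the sample, not the size of the effect.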

The root of the problem I am most interested in is that statistical analysis of anything, ever, is only as useful as the samples available are representative of the population. Statistical significance is irrelevant to this determination. The problem described in various guises above is that the bullshitters have little conception of what such representativeness could or should mean. But an even more insidious problem can easily creep into the work of the cargo cult mathematician. The bullshitter knows that what she is saying is bullshit. But the financier may well either believe his sample *is* representative of the population or, even worse, not think for long enough to realize that this condition is essential in the first place, possibly stopping after hitting statistical significance.

In finance, we are always implicitly assuming that the true population extends into the future, otherwise what’s the point? You can’t trade in the past! This creates a serious problem that the reader has probably realized: causation is real; creativity is real; people are not automatons and economic activity does not follow differential equations; therefore, no financial sample is *ever* representative of the population. Even if the sample is literally everything that has ever happened. It still won’t work. Thinking is required. By a human.

I’m going to be even more obnoxious about this in case the reader is unconvinced of how big a problem it is. I think that most people in the modern world simply have no idea what statistics is for or what it tells you. Especially in finance. Mathematical statistics developed in the late 19th century as a tool for physical science to solve one pretty generic problem which had countless specific instances: *how well does the observed data fit my mathematically articulated theory?*

Now, most worryingly outside of the physical sciences, it is (mis)used to solve an utterly fallacious problem: *what mathematically articulated theory can be provided to fit the observed data?* This is a completely different question, and if you are even asking it to begin with then you very probably don’t understand statistics or math or science.

For starters, at no point in the second approach do you understand — or even attempt to understand — the phenomena you are observing. What you are actually doing is adopting a circa-seventeenth-century philosophy of science of mindless induction: what happened before is what will happen again. Why? Because our result is statistically significant. And what’s worse is you are perverting the most extraordinarily powerful explanatory toolkit ever devised to make it seem like what you are doing is legitimate science, making it even less likely that you will realize this mistake. “But I did statistics!” No, you did cargo cult math.
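Here is what the backwards question looks like in about twenty lines (my own toy construction: pure noise, and a polynomial ‘theory’ fitted to it exactly):

```python
import random

random.seed(0)

# Sixteen observations of pure noise: there is, by construction,
# nothing to discover.
xs = list(range(16))
ys = [random.gauss(0, 1) for _ in xs]

def interpolate(x_pts, y_pts, x):
    """Evaluate the unique polynomial through the given points at x
    (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_pts, y_pts)):
        term = yi
        for j, xj in enumerate(x_pts):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The 'theory' fits the first eight observations perfectly...
in_sample = max(abs(interpolate(xs[:8], ys[:8], x) - y)
                for x, y in zip(xs[:8], ys[:8]))

# ...and is spectacularly wrong about the next eight.
out_sample = max(abs(interpolate(xs[:8], ys[:8], x) - y)
                 for x, y in zip(xs[8:], ys[8:]))

print(in_sample, out_sample)
```

The fit ‘explains’ everything ever observed and understands nothing — which is the cargo cult in one picture.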

It was actually recently put to a group I was part of, in an entirely serious setting, that the reason we assign a probability of ½ to a coin landing heads is because that is what the data indicates, and *not *because of the symmetry of the coin. Like infinite returns, you really have to be quite clever to believe something so stupid. But this is not at all atypical. This is a perfect case study of exactly how AI will abuse meaningful statistics entirely to death.

I contemplated making a point along the following lines: what then do probability density functions *mean*? Take the Poisson distribution, for example. This tells us the probability that some number of independent but probabilistically identical events with known frequency will occur in a given timeframe. There are infinitely many Poisson distributions because each is parameterized by λ, the expected number of occurrences in the timeframe. λ is a real number, and so there is some λ — call it *p* — for which no Poisson distribution has ever been measured, what with our universe being finite and all. Were we to engineer an event that we know will behave according to Poisson-p, could we predict anything about it? I say yes, because we understand statistics; they say no, because this has never happened before. We can’t possibly know anything. We have to wait and see.
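To make this concrete, take a parameter nobody has plausibly ever measured (λ = π will do as a stand-in for *p*) and note how much the structure of the distribution already tells us before a single observation exists:

```python
import math

def poisson_pmf(lam, k):
    """Probability of exactly k events when the expected count is lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = math.pi  # a lambda that has plausibly never been measured

probs = [poisson_pmf(lam, k) for k in range(50)]
mode = max(range(50), key=lambda k: probs[k])

# Before any data: the most likely count is floor(lambda) = 3, and the
# probabilities sum to ~1, as any probability distribution's must.
print(mode, sum(probs))
```

We can predict the most likely count, the variance, the tail behavior — all of it — from understanding alone. No waiting required.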

There is a traditional teaching point about the purpose of theories and logic in science that is usually articulated as follows: the single most accurate theory is a list of every observation ever made. Why is it so good? Well, it correctly predicts every observation ever made! Huzzah! The approach I am criticizing takes this (obvious joke) a step further by saying: take this theory of everything ever observed, do a linear regression, check for statistical significance, and then have a theory of everything that will have an answer for every question.

Consider this excerpt from John Kay’s wonderful book, *Other People’s Money*, the title of which unsubtly hints at why this behavior is so repugnant:

*“David Viniar, CFO of Goldman Sachs, claimed as the global financial crisis broke in August 2007 that his bank had experienced ’25 standard deviation events’ several days in a row. But anyone with a knowledge of statistics (a group that must be presumed to include Viniar) knows that the occurrence of several ’25 standard-deviation events’ within a short time is impossible. What he meant to say was that the company’s risk models failed to describe what had happened. Extreme observations are generally the product of ‘off-model’ events. If you toss a coin a hundred times and all the tosses are heads, you may have encountered a once in a lifetime statistical freak; but look first for a simpler explanation. For all their superficial sophistication, the masters of the universe had no real understanding of what was going on before them.”*

And,

*“Investors look at economic fundamentals; traders look at each other; ‘quants’ look at the data. Dealing on the basis of historic price series was once described as technical analysis, or chartism (and there are chartists still). These savants identify visual patterns in charts of price data, often favouring them with arresting names such as ‘head and shoulders’ or ‘double bottoms’. This is pseudo-scientific bunk, the financial equivalent of astrology. But more sophisticated quantitative methods have since proved profitable for some since the 1970s’ creation of derivative markets and the related mathematics.”*

By ‘mathematics’, Kay of course means cargo cult math (here is some real math, if the reader is at all interested) and by ‘the company’s risk models failed to describe what had happened’, Kay means, ‘they thought they were playing a numerical game, didn’t seem to realize that the numbers reflected happenings in the non-ergodic real world, and really just had absolutely no idea what they were doing.’ If, for whatever reason, you want to generate some 25 standard deviation events, applying some cargo cult math is probably a good start. David Swensen recounts a similar episode from some 15 years prior in his best-seller *Pioneering Portfolio Management*:

*“Granite Capital’s David Askin suffered devastating losses while pursuing an apparently sophisticated strategy of identifying, hedging, and leveraging pricing anomalies in mortgage-backed security derivatives. Based on a record of generating mid-teens returns with low variability, Askin attracted funds from an impressive list of investors, including the Rockefeller Foundation and McKinsey & Company. Unfortunately, explicit and implicit leverage caused the firm’s entire $600m portfolio to be wiped out in a matter of weeks.*

*Askin invested in collateralised mortgage obligation derivatives with exotic names such as super-inverse interest-only strips. As unusual as the names might be, the securities simply represented pieces of pools containing ordinary home mortgages. By combining securities expected to respond in opposite ways to interest rate movements, Askin hoped to be hedged; when rates move, profits on one part of the portfolio would more or less offset losses on the other. If Askin correctly identified pricing anomalies, profits would exceed losses by a margin equal to the mispricing.*

*Because mispricings in fixed income markets tend to be small, investors frequently use leverage to magnify the portfolio impact of trades. At the time of his denouement, Askin employed leverage of approximately two and one half times, running positions of $2 billion on $600 million of equity.*

*Askin’s portfolio failed to weather the trauma caused by the Federal Reserve’s decision to increase interest rates in early 1994. What appeared to be well-hedged positions in a benign interest rate environment turned out to be wildly mismatched positions in a bearish bond market. As Askin’s portfolio accumulated losses, investment banks that lent money to fund the positions seized the bonds, selling positions to cover their exposure. The combination of poor portfolio structure and leverage led to Askin’s downfall.”*

Or, alternatively, Askin forgot that causation is real and that the population on which he was performing fantastic statistical analysis extended into the future, a strange place in which things are usually different from the present or past. Although Kay did also allude to the existence of real ‘investors’ above, I worry I may have inadvertently convinced the reader that nobody knows anything and finance is all a joke. David Swensen is one of the most renowned endowment managers in history, if not the most renowned, and this excerpt comes from a section on how *not* to do finance. Some people do know what they are doing. What a relief. Elsewhere in his (excellent) book, Swensen joyfully mocks the investment approach that inevitably results from a feeling of *having all the data*:

*“More succinct advice to those who must time markets comes from remarks attributed to a nineteenth-century cotton trader: ‘Some think it will go up. Some think it will go down. I do, too. Whatever you do will be wrong. Act at once.’”*

And so, the reader will hopefully be relieved to know that not all finance is a tragic cargo cult. But they will also be forgiven for having received that impression from the media. Production capital that takes years or even decades to give its long-term investors a handsome return is nowhere near as sexy as high-frequency trading in financial capital whose associated production nobody has time to look up. But that patient production capital is also, among other reasons, why we don’t live in caves and hunt antelope. Cavemen presumably traded nuts and berries at relatively high frequencies, which achieved absolutely nothing in terms of long-term improvements in labor productivity. Amongst the most quoted passages of Keynes’ *General Theory* is this gem,

*“Of the maxims of orthodox finance none, surely, is more anti-social than the fetish of liquidity, the doctrine that it is a positive virtue on the part of investment institutions to concentrate their resources upon the holding of ‘liquid’ securities. It forgets that there is no such thing as liquidity of investment for the community as a whole. The social object of skilled investment should be to defeat the dark forces of time and ignorance which envelop our future. The actual, private object of the most skilled investment to-day is ‘to beat the gun’, as the Americans so well express it, to outwit the crowd, and to pass the bad, or depreciating, half-crown to the other fellow.”*

The image problem that finance faces stems precisely from the fact that almost everything is liquid. And if the thing itself isn’t, then some derivative on it probably is. Liquidity invites trading, trading generates data, and worship of such data quickly descends into cargo cult math. But liquidity also invites excitement, and excitement makes for compelling narratives. This is more or less why practitioners of legitimate and responsible finance, no matter their attempts at logic and reasoning (this essay being just one more futile entry in a long history of failure), never seem to gain much ground in the broader public imagination. It is why even the word ‘quantitative’ is most habitually applied to ‘finance’, to which it least belongs, and only as an afterthought to physics, chemistry, and biology, to which it is key. Luckily, there are some who are especially committed to turning this around …

**Complexity Science**

The fatal conceit of most cargo cult math is that because some system seems complex, any simplification of it is worthwhile. The problem is that a taxonomy that doesn’t touch on any underlying simplicity of structure is not a simplification at all, but an obfuscation. It makes things more complicated, not less. A relatedly amusing problem is that ‘complex’ actually has a meaning — like many terms cargo cult mathematicians imprecisely deploy. It describes a category of underlying conditions of a system that, as I outlined at the very beginning, fundamentally explains its macroscopic state. This is real mathematics. To end on a cheery note and avoid giving the reader the impression that everything is awful and everybody is deluded, I will lay out some of the most accessible results of this branch of mathematics, *complexity science.*

Most of the examples given come from the excellent *Scale*, by Geoffrey West, but I stress that this is not a book review. *Scale* is a popular science book largely adapted from West’s own work, and that of his colleagues at the Santa Fe Institute, whereas I simply extract a few readily understood case studies of the kinds of insights that can be generated by these techniques. First, though, I highlight a quote that entirely coincidentally captures the frustration of this entire essay,

“*Although the term ‘social physics’ faded from the scientific landscape it has been resurrected more recently by scientists from various backgrounds who have started to address social science questions from a more quantitative analytic viewpoint typically associated with the paradigmatic framework of traditional physics. Much of the work that my colleagues and I have been involved in and which will be elucidated in some detail in later chapters could be described as social physics, although it is not a term any of us uses with ease. Ironically, it has been picked up primarily by computer scientists, who are neither social scientists nor physicists, to describe their analysis of huge data sets on social interactions. As they characterize it: ‘Social Physics is a new way of understanding human behavior based on analysis of Big Data.’ While this body of research is very interesting, it is probably safe to say that few physicists would recognize it as ‘physics,’ primarily because it does not focus on underlying principles, general laws, mathematical analyses, and mechanistic explanations.*”

Quite, Geoffrey, quite. You say, ‘*does not focus on underlying principles*’, I say, ‘*have no idea what they are talking about*’, but I gave up any pretense of diplomacy pretty much at the beginning of the essay. If your lack of understanding of the ‘underlying principles’ of the subjective value of autonomous individuals leads you to employ ten times leverage on numerology, you are not simply ‘not a physicist’, you are a dangerous lunatic. Let’s focus on some underlying principles then, shall we?

As an example, consider the following intriguing property that holds within remarkably small errors across all mammals: the metabolic rate — energy expended per unit time — is proportional to the mass of the mammal to the power of ¾. This is known as *Kleiber’s Law* (which actually applies to many more types of animal, but restricting it to mammals reveals yet further structure). For every doubling in the mass of a mammal, its metabolic rate goes up by only about 75% (2^¾ ≈ 1.68, so strictly a 68% increase), meaning there is a kind of 25% ‘efficiency saving’ with scale. A mammal 10,000 times bigger than another will consume only 1,000 times as much energy. And yet there is a smallest and a largest mammal: the Etruscan shrew and the blue whale. Across the two dimensions of mass and metabolic rate, all mammalian life exists on one straight line (when plotted logarithmically) with two distinct endpoints. Why? West teases,

*“Is all of this solely arbitrary and capricious, the result of millions of tiny accidents and fluctuations in our long history that have been frozen in place by the process of natural selection, at least for the time being? Or is there some order here, some hidden pattern reflecting other mechanisms at work?*”
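Before getting to causes, the arithmetic of the law itself is worth a quick check. A minimal sketch of the ¾ power (the function name is mine, purely illustrative):

```python
# Kleiber's Law: metabolic rate B scales with mass M as B ∝ M**(3/4),
# so relative rates depend only on the mass ratio.
def metabolic_ratio(mass_ratio: float) -> float:
    """Factor by which metabolic rate grows for a given mass ratio."""
    return mass_ratio ** 0.75

print(metabolic_ratio(2))       # ≈ 1.68: double the mass, ~68% more energy
print(metabolic_ratio(10_000))  # ≈ 1000: 10,000x the mass, ~1,000x the energy
```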

This certainly seems like a ‘hidden order’, but does it have a cause? What follows is about as concise an explanation of this phenomenon as I can manage. It does not at all do justice to what is around 100 pages of West’s book, but the impression I want to give is not really one about biology, but about what kinds of uses of mathematical thinking are real and legitimate, as opposed to stemming from a data-driven cargo cult. The short answer is that there are three mathematical principles at play: one optimization principle that follows from the fractal self-similarity of scaling up or down the circulatory system of mammals with differing masses; and two thresholds for kinds of ‘stress’ on the system imposed by other parts of its construction that are tolerable up to a certain bound.

The gist of the proportionality in scaling is as follows: biological building blocks are mostly, if not entirely, discrete, meaning the terminal units are invariant across all animals. Some consequences of this are obvious, such as blood cells having the same mass and volume across animals, but others are less so, such as capillaries also having the same size across mice, men, and whales. But the increased mass of a whale requires far more capillaries, given that every cell must be (more or less) reached by the circulatory network. A blue whale is a hundred million times the mass of a shrew, but it does not have a hundred million tiny shrew hearts. It has one heart that must service (on the order of) a hundred million times as many capillaries. The network in between must be *space-filling*, a technical term that thankfully means exactly what it sounds like.

This all suggests a kind of optimality in network design, in that there will in principle be *some kind* of circulatory system that uses the minimum energy required to move some minimum requirement of stored energy to every terminal unit. Any other system will either use more energy than necessary in delivery or won’t deliver enough energy to begin with. It is also reasonable to expect that, due to natural selection, something like the optimal system would dominate its peers and be passed on, since *minimizing* energy used in this basic, involuntary, and life-sustaining activity *maximizes* the energy available for eating, fighting, and reproducing. The primary concern in maximizing the efficiency of a beating heart is to minimize the reflections produced by blood flow at any branching points in the network. If the sum of the cross-sectional areas of the daughter tubes equals that of the parent tube, there will in theory be no reflections at a branch point. But given that the cross-sections of the tubes are (more or less) circular due to other optimization conditions, there will always be slight turbulence as the tube deforms in order to branch. The simplest design that minimizes this turbulence is to split into two daughter tubes. These constraints combine to suggest a fractal geometry both within and across mammalian circulatory systems, such that the circulatory systems of larger mammals are ‘scaled up’ versions of those in smaller ones.
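The ‘no reflections’ condition pins down exactly how much narrower the daughter tubes must be: for a bifurcation, matching the parent’s cross-sectional area means each daughter radius shrinks by a factor of √2. A minimal sketch of that constraint (names and numbers are mine, purely illustrative):

```python
import math

def daughter_radius(parent_radius: float, n_daughters: int = 2) -> float:
    """Daughter radius that preserves total cross-sectional area
    across a branch point (the no-reflection condition)."""
    return parent_radius / math.sqrt(n_daughters)

r = daughter_radius(1.0)               # ≈ 0.707 for a bifurcation
parent_area = math.pi                  # parent radius 1.0
daughters_area = 2 * math.pi * r ** 2  # two daughter tubes
print(abs(parent_area - daughters_area) < 1e-12)  # True: areas match
```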

But scaled up *how*? The fractal geometry of the space-filling network is key. A ‘fractal dimension’ is, in the most general terms, a quantification of the complexity of a curve as revealed by the scale at which it is examined. West treats this part of the argument in a fairly fluffy manner in the book, saying that it is ‘as if the circulatory system earns an extra dimension to expand by’, thus scaling efficiently. I don’t say this to diminish the prose, but just to make sure the reader isn’t confused as to whether they were supposed to follow that or not. The more rigorous explanation is a bit much for this essay, but the interested reader can easily look up what is called the WBE model (West, Brown, Enquist). The short version, with no explanation of where the formula comes from, is this. Assume that mass scales linearly with the number of terminal units, that the network is hierarchical, that vessels within the same level of the hierarchy are equivalent, that the branching ratio n is constant, that the network is space-filling, and that energy waste is minimized. Then the scaling exponent a for metabolic rate with mass is a = −ln n / ln(λβ²), where β is the ratio of a daughter branch’s radius to its parent’s, and λ is the ratio of a daughter branch’s length to its parent’s. Preserving cross-sectional area at branch points gives β = n^(−1/2), and space-filling gives λ = n^(−1/3), so λβ² = n^(−4/3) and a = −ln n / ln n^(−4/3) = −ln n / (−(4/3) ln n) = ¾, whatever the branching ratio, including the bifurcating n = 2 specified above.
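The cancellation at the end of that derivation is mechanical enough to check numerically. A minimal sketch, assuming only the two ratio conditions (the function name and notation are mine):

```python
import math

def metabolic_exponent(n: int) -> float:
    """WBE scaling exponent a = -ln(n) / ln(lam * beta**2) for
    branching ratio n, with beta = n**(-1/2) (area-preserving radii)
    and lam = n**(-1/3) (space-filling lengths)."""
    beta = n ** (-1 / 2)
    lam = n ** (-1 / 3)
    return -math.log(n) / math.log(lam * beta ** 2)

# The logarithms cancel, so the exponent is 3/4 for any branching ratio:
for n in (2, 3, 10):
    print(round(metabolic_exponent(n), 6))  # 0.75 each time
```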

The lower bound on mass comes from the simple fact that the viscosity of blood means that only a certain size of vessel can support the force of the pulsatile waves produced by a beating heart. In fact, all mammals have very close to 15 levels of vessels supporting non-pulsatile flow. But the terminal units are invariant, meaning that those 15 levels are all of around the same size across species, after which there are successively damped pulsatile vessels leading back to the heart. This damping loses energy, and so, in the spirit of minimizing energy usage, there must be at least a few levels of branching before the entirely non-pulsatile region begins. The size of the terminal units, the density and viscosity of blood, the elasticity of arterial walls, and other such biological constants can then be used to derive the theoretically smallest mass of mammal, which happens to match very closely the Etruscan shrew, at just a few grams.

The upper bound comes from a different scaling law determining that the distance between the terminal units — in this case, capillaries — scales with mass with an exponent of 1/12. This is a relatively small exponent, and so a blue whale being 100 million times heavier than a shrew results in its capillaries being only 4.6 times further apart. However, capillaries are (more or less) *invariant*, meaning the same size of capillary has to deliver energy to more and more cells as mass increases. The distance that oxygen can diffuse across capillary walls and through tissue has a theoretical limit known as the Krogh radius. As above, this, along with the mass of cells and other such constants, can be used to derive the theoretically largest mass of mammal, which happens to be very close to the blue whale, at 200 tons.
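The 4.6× figure is just the 1/12 power in action, and is easy to verify. A one-line check (the 10⁸ mass ratio is the one from the text; the function name is mine):

```python
# Average capillary separation scales with body mass as M**(1/12),
# so a mass ratio of 100 million widens the spacing only modestly:
def separation_ratio(mass_ratio: float) -> float:
    return mass_ratio ** (1 / 12)

print(separation_ratio(1e8))  # ≈ 4.64
```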

There are (at least) two fascinating things about these results. The more obvious, and the point of West’s book, is that understanding both the physical relevance of the biological constants of life and the mathematical structure of the mechanical systems life seems to have evolved produces incredibly powerful and accurate explanations for broad ranges of varieties of life. And by ‘powerful’ I do not at all mean ‘complicated’ but almost the exact opposite. The explanations are simple; they involve isolating constants that relate enormous ranges of observed data. The second, more delightful from the point of view of this essay, is that these constants do not derive from learned machines crunching masses of data. In fact, they can be derived from no data whatsoever, by *understanding* the conditions of the system producing the data in the first place. Good luck asking a statistical-significance-chasing hedge fund manager for that …

No, this is real mathematics, with nary a cargo cult in sight.

*Follow me on Twitter @allenf32*