The value of uncertainty
Noonan’s North bar and grill is in an unassuming blue clapboard building, just off Main Street in the town of Holy Cross, Iowa (population 366). Locally, it’s popular for its BBQ ribs and all-you-can-eat shrimp nights. This is not the kind of place a person tends to drive hundreds of miles to visit. But Max Hawkins’s phone had told him to go there – and so go there he did.
Hawkins is a computer scientist (turned artist) who spent more than two years ‘living randomly’. His story began when, working as a Google engineer (his dream job) in San Francisco (his dream city), he realised that he had optimised his life to fit his preferences to a degree he suddenly found alarming. He started every day at the stroke of 7am, went to the best coffeeshop, then cycled an optimal 15-minute route to work. A simple algorithm, fed with his GPS tracker data from one week, could predict with great accuracy his whereabouts and movements the next week at the same time of day. This smacked, he felt, of a certain lack of personal autonomy.
Despite having fit his life almost exactly to his preferences, he felt trapped – as if he had optimised his life to the point where his own role had become superseded. Hawkins responded by using new technologies to introduce greater variety into his life. For two years, he lived his life according to a series of randomisation algorithms. A diet generator told him what to eat, an algorithmic travel agent picked out the city where – having gone freelance – he would live for the next two months, a random Spotify playlist provided music for the journey, and a random Facebook-event selector told an Uber driver where to take him when he got there.
The algorithms took him to acrobatic-yoga classes in Mumbai and to a goat farm in Slovenia, but they also took him to the small-town pub of Holy Cross, Iowa, and to an eighth-grade flute recital, and to a small family Christmas in Fresno, California. Anywhere that would break him out of the comfortably predictable rut of the affluent San Franciscan tech worker. Reporting back from the frontiers of uncertainty in talks with titles such as ‘Leaning In to Entropy’, Hawkins said that the algorithms dictated not just where to go, what to eat and what leisure activities he should engage in, but even what clothes and hairstyles (he ended up needing several wigs) he should adopt. He even has a chest tattoo selected randomly from images on the web.
Hawkins reported finding great fulfilment in multiple unexpected ways, and feeling (paradoxically) more present as a person as a result of escaping what he had come to see as the dictatorship of his own preferences and preference-optimised lifestyle. He talked of escaping the tiny ‘bubbles’ of places to eat and things to do that kept on dragging him back time and time again.
Viewed from a certain perspective, Hawkins’s unease can seem strange, even paradoxical. Brains like ours are – according to a leading neuroscientific theory known as ‘predictive processing’ – designed to solve one basic puzzle: how to minimise their own long-term average surprise (prediction error) during our embodied exchanges with the world. The more volatile the environment, the less grip this core strategy gets, resulting in anxiety, stress and feelings of loss of control. Yet there was Hawkins, apparently adding huge doses of the unexpected into his life.
However, even from a predictive processing perspective, staying locally within the bounds of the expected is only one part of a much more complex story. For those very same predictive brains were designed to drive mobile, inquisitive creatures like ourselves. Such creatures must productively surf the waves of their own uncertainty. To do so, they probe and sample the world in ways that aim to reveal just where the key uncertainties lie, so that by future actions they can resolve them and move on. They seek new information, and they engage in complex rituals such as art and science whose role (we’ll argue) is in part at least to safely reveal and stress-test their own deepest assumptions.
Hawkins was actually doing something rather similar – stress-testing his own deepest assumptions about who he is and what he likes so as to more fully explore a space of human possibility. His methods were extreme, but his general project is both familiar and distinctively human. Creatures like us, it seems, have added some brand-new layers to our relationship with the space of our own predictions, errors and uncertainties, turning that space into a kind of concrete arena that affords deeper and more challenging explorations than those undertaken by most other living organisms. We have discovered ways of turning our own best models (including our self-model) into objects apt for explicit questioning.
The examined human life reflects, we suggest, a new kind of relationship with our own expectations and uncertainty. Yet it is one that we have somehow constructed within the inviolable bounds of a biologically bedrock drive to minimise long-term prediction error. How is this neat trick possible?
Understanding our own relationship with uncertainty has never been more important, for we live in unusually challenging times. Climate change, COVID-19 and the new order of surveillance capitalism make it feel as if we are entering a new age of global volatility. Where once for many in the West there were just pockets of instability (deep unpredictability) in a sea of reliability – albeit sometimes in disagreeable structures and expectations – it lately seems as if there are just pockets of stability in a swirling sea of hard-to-master change. By better understanding both the varieties and the value of uncertainty, and recognising the immense added value of turning our own uncertainties and expectations into concrete objects apt for test and challenge, we become better able to leverage the power of our own predictive brains.
The desire to escape a predictable life is a familiar theme in literature. Moby Dick’s Ishmael takes to the ocean, and Steppenwolf’s Harry Haller asserts that he ‘would rather feel the very devil burn in me than this warmth of a well-heated room’. In the counterculture classic The Dice Man (1971), the bored psychiatrist Luke Rhinehart – pseudonymous author of this fictionalised autobiography – entrusts his decisions to the roll of a die, inspiring readers to do the same.
Yet the decision to open oneself up to total uncertainty, merely for its own sake, is far from the norm. Humans are creatures of habit. We eat the same thing for breakfast, take the same path to work, and meet the same people in the same pub for drinks afterwards. In 2010, a group of computer scientists analysed the mobility patterns of 50,000 anonymous cellphone users, and found it was possible to predict their future locations with 93 per cent accuracy based on past behaviours. Yet as we negotiate even our predictable daily worlds, we have no choice but to surf wave upon wave of uncertainty.
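The flavour of that 2010 result is easy to reproduce in miniature. Below is a hedged sketch – with invented locations, an invented daily routine and an arbitrary 5 per cent deviation rate, none of it drawn from the study itself – in which a first-order Markov model trained on half of a synthetic location record predicts the other half:

```python
import random
from collections import defaultdict, Counter

random.seed(0)

# A fixed daily round of locations, with 5 per cent random deviations.
routine = ["home", "cafe", "office", "gym", "pub"]
places = routine + ["library", "market"]
history = [loc if random.random() > 0.05 else random.choice(places)
           for _ in range(1000) for loc in routine]

# Learn transition counts from the first half of the record...
split = len(history) // 2
counts = defaultdict(Counter)
for prev, nxt in zip(history[:split], history[1:split]):
    counts[prev][nxt] += 1

# ...then guess each next location in the second half from the most
# frequently observed transition.
hits = total = 0
for prev, nxt in zip(history[split:-1], history[split + 1:]):
    if counts[prev]:
        hits += counts[prev].most_common(1)[0][0] == nxt
        total += 1

accuracy = hits / total
print(f"next-location accuracy: {accuracy:.0%}")
```

Because the underlying routine is almost deterministic, even this crude predictor scores far above chance – habit, not clairvoyance, is doing the work.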
Not all uncertainties, however, are equal.
A handy taxonomy distinguishes expected uncertainty, unexpected uncertainty, and volatility. Expected uncertainty is task-salient uncertainty that is already predicted by an existing mental (generative) model – a set of structured knowledge that enables us to generate local predictions in ways nuanced by context and current task. Unexpected uncertainty arises when – for example – an environmental change causes us to become uncertain about our own generative model. Volatility is subtly different: it names a situation in which the frequency of changes in the environmental contingencies is itself rapidly changing. Volatility is thus the most potentially anxiety-provoking species of uncertainty. It is uncertainty about the space of uncertainty itself.
Let’s start with a simple case of expected uncertainty. An agent who knows her way home from a certain location might navigate to that location even when aware that home actually lies in another direction. By reaching the familiar location, she collapses task-salient uncertainty and can proceed fluently to solve her real problem. The intervening action, which was designed to improve her state of information, is sometimes dubbed ‘epistemic’ (knowledge-seeking) as opposed to pragmatic (puzzle-solving).
When confronted with unexpected uncertainty, our brain reacts by increasing its learning rate
Work in predictive processing understands epistemic actions as actions selected to minimise expected future surprise (expected future prediction error). This hints at a deep and unexpected continuity between simple strategies such as returning to that familiar point, and much more complex, distinctively human, strategies, such as using Google Maps to help us find our destination from wherever we happen to be. What unites these superficially very different strategies is that each can be seen, in different ways and at different timescales, as a way of reducing salient expected uncertainty (hence future prediction error) by harvesting information from the local environment. The ability of prediction-error minimising systems to find solutions of this kind has now been demonstrated in multiple simulation studies too, including one in which simulated rats use a mixture of pragmatic and epistemic actions, navigating first to familiar landmarks before exploiting their existing knowledge to find their way home.
More interesting cases of the same broad type arise when purely mental actions are performed in order to ‘think things through’ in advance of acting. All such actions are epistemic, and seek to improve our state of information prior to acting – but this time by making the best use of what we already know. For example, I can work out the number of radiators I need to buy to replace those in my living room by counting them in my mind’s eye. Imagination, nuanced by the use of selective attention, is a key means by which we engage in ‘vicarious trial and error’, whose role is to minimise expected future uncertainty by pressing new information from the existing model.
Expected uncertainty is experienced whenever we know the weak spots in our own state of information, and we can take remedial action accordingly. If I know that I don’t know the way home from where I am, and know how to reach the spot from which I can reliably succeed, then my uncertainty is fully expected, and (in that case) remediable. Or if I know that the die is fair, then I know that no player, myself included, can know with confidence the outcome of an unbiased roll. This is expected and intransigent uncertainty of quite a familiar kind.
Unexpected uncertainty, by contrast, typically occurs when the environment (in effect, the rule structure) changes without our being in a position to predict that change. For example, suppose someone (without my knowledge) swapped the fair die for a loaded one, and play commenced. Now, my estimations of the other player’s uncertainty are suddenly rendered incorrect, and I will start to lose some serious money. Or suppose (to borrow an example from the neuroscientists Amy Bland and Alexandre Schaefer) I know that a certain restaurant has dishes I like about 80 per cent of the time, so that eight out of 10 visits will tend to yield a happy outcome. The uncertainty about the offerings of the day is then an expected uncertainty: one that I can work with as I plan my outings. By contrast, if the restaurant suddenly changes its chef, my estimates are immediately unreliable. I am thrust into the land of unexpected uncertainty – the scary land that, remarkably, Hawkins found so surprisingly satisfying.
Volatility means that there is little useful to learn at the target level except that things are apt to change
When confronted with unexpected (salient) uncertainty, our brain reacts by increasing its learning rate, encouraging the kinds of plastic change needed to update the predictive model – for example, by starting to learn about the typical menus created by that new chef. Over time, the upshot should be a revised model, one in which (let’s imagine) I expect dishes I like to be served only about five times during any 10 visits to the restaurant. This might be my cue to move into exploratory mode and try another restaurant.
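That adjustment can be caricatured in a few lines of code. The sketch below is a toy of ours, not a model from the literature: it uses a simple delta rule to track the restaurant’s hit rate, and boosts the learning rate whenever recent surprise outstrips the surprise the current model itself predicts (for a Bernoulli estimate q, an average absolute error of about 2q(1 − q)):

```python
import random

random.seed(1)

p_like = 0.8       # true chance of a dish I like (unknown to the learner)
estimate = 0.5     # the learner's current model of that chance
rate = 0.05        # slow default learning rate
recent = 0.32      # smoothed measure of recent absolute surprise

for visit in range(400):
    if visit == 200:
        p_like = 0.5           # the chef changes without warning
    if visit == 199:
        pre_change = estimate  # snapshot of the settled model

    outcome = 1.0 if random.random() < p_like else 0.0
    error = outcome - estimate

    # Compare recent surprise with the surprise the model itself predicts:
    # for a Bernoulli estimate q, expected absolute error is 2q(1 - q).
    recent += 0.1 * (abs(error) - recent)
    expected_surprise = 2 * estimate * (1 - estimate)
    rate = 0.3 if recent > expected_surprise + 0.12 else 0.05

    estimate += rate * error

print(f"estimate before chef change: {pre_change:.2f}, after: {estimate:.2f}")
```

The idea is that the estimate settles near 0.8, the chef change drives surprise above what the model expects of itself, and the resulting burst of plasticity lets the estimate resettle near the new rate before the learning rate drops back down.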
This form of uncertainty can in fact be very beneficial for organisms like us. This is just the sort of uncertainty that can help us break out of bad habits and escape ‘local minima’ – good enough solutions that fall far short of what we might achieve by ‘pushing on through’.
This next (and final) kind of uncertainty is the most challenging of all. To creep up on it, notice that expected and unexpected uncertainties come in first- and second-order forms. First-order forms are about simple targets, such as the chances of getting the food we like. Second-order forms are about the first-order estimations themselves. I might be confident or unconfident about my estimation that most visits will yield a meal I like – for example, if I have been only a few times, or if others with similar tastes to my own seem to have had very different experiences.
But there are also environments that offer a special challenge to our predictive minds. Such environments make the second-order estimations unreliable. These are the so-called ‘volatile environments’ in which the statistical regularities themselves, when probed in my usual way, are unstable. In June 2020, COVID-19-related issues afflicting the world were joined by mass unrest and protests following the killing of George Floyd in Minneapolis – sudden dramatic shifts that are a paradigm of high volatility. Bland and Schaefer’s example is less dramatic: a restaurant manager who decides to change the menu many times every season, even while the chef remains the same. The learning that ensues – either when I start to notice that my favourite dishes are less frequently available, or when I start to adapt to the COVID-19 crisis – is itself unreliable.
This is like playing with dice that are sometimes fair, sometimes loaded one way, sometimes loaded another way, and so on. It is a world in which the frequency of change in the underlying ‘rule structure’ is high. This is a world that is highly resistant to informative learning, apart from learning that it is indeed such a world – and hence assigning lower confidence to all our estimates of target states.
Such environments are especially challenging. The failure to get a grip on expected (first-order) uncertainty initially drives us towards plasticity and learning, and perhaps into an exploratory mode as we seek better and more stable environments. But the volatility means that this strategy is itself suspect, as there is little useful to learn at the target level except that things are apt to change. This is a situation that human and animal minds typically find extremely uncomfortable. It is, in fact, not unlike how the standard environment (and especially the social environment) might seem to those with autism spectrum condition, which has been theorised to involve overestimating the importance of small sensory deviations from expected patterns, hence estimating that the world itself is highly volatile and hard to predict.
Human experience, we believe, reflects nothing so much as the operation of predictions and uncertainty estimations along many dimensions and at many levels of processing. When all goes well, a wide range of predictions and estimations of their reliability (uncertainty) allow us to leverage everything we have been through, a whole life of experience and learning, to quickly detect those sensory patterns that matter to us, assess the reliability of our own expectations relative to the current sensory evidence, and (hence) to behave in ways that help bring about desired and beneficial patterns.
But there are dangers here too. Our predictions about the world can be mistaken or misled in various ways. Our hidden biases can sculpt how we perceive and behave in the world in ways that result in the world conforming to our mistaken view – in effect making our mistake into a reality, which only reinforces our belief in that bias. Vicious cycles such as these in fact characterise many forms of functional (‘psychogenic’) illness and some forms of psychosis.
Hunger, homelessness, loneliness and chronic pain are all examples of situations and states that continually produce volatility (difficult-to-manage negative surprises). Sustained exposure to such volatile situations and environments – where the outcomes of actions appear inherently unpredictable – leads to an inevitable decrease in confidence in one’s ability to bring about the outcomes one expects. At that point, our predictive brains begin to infer an inability to exert successful control, and this then forms a damaging part of the model that guides our future actions.
This can result in a form of learned helplessness. Learned helplessness occurs when an animal is exposed to an aversive outcome that’s unavoidable. Many studies (rather cruelly) involve rats receiving electric shocks without the chance of escape. What is striking is that, even if an avenue of escape (a door) is made available, after a certain point the rat won’t even try to escape. It has learned that it is helpless, that it is unable to behave in ways that will avoid these aversive outcomes. The computational basis of states such as learned helplessness is understood in predictive processing (in the field of ‘computational psychiatry’) as an ingrained expectation of being unable to control outcomes, due to the environment being too volatile to predict and navigate successfully. Beliefs about levels of control and ability to avoid adverse outcomes that might underpin adaptive behaviour in one environment (when the door is closed) don’t necessarily carry over to other environments (when it opens).
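The logic can be captured in a deliberately crude sketch – ours, not a published model – in which an agent keeps a beta-distributed belief about the probability that an escape attempt succeeds, and only bothers to act when that belief clears a small cost threshold:

```python
# Beliefs are beta-distributed: (successes, failures) pseudo-counts.
successes, failures = 1, 1     # uniform prior over controllability

def p_success():
    return successes / (successes + failures)

# Phase 1: the door is shut, so every escape attempt fails.
for _ in range(50):
    if p_success() > 0.05:     # the attempt still seems worth the effort
        failures += 1          # ...but it fails

# Phase 2: the door opens, and attempts would now succeed.
attempts = 0
for _ in range(50):
    if p_success() > 0.05:     # belief has collapsed: this is never true
        attempts += 1
        successes += 1

print(f"belief in control: {p_success():.2f}, escape attempts: {attempts}")
# → belief in control: 0.05, escape attempts: 0
```

Once the belief about control has collapsed below the action threshold, the agent never samples the changed environment, so no evidence ever arrives that could revise the belief – the open door goes untested.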
Addictive substances exploit a different vulnerability, again allowing suboptimal cycles to take hold. Opioids ‘hijack’ reward circuitry in the brain that predictive processing researchers take to compute estimations of the rate of prediction-error minimisation – that is, the brain’s estimation of how well it is doing at reducing prediction error compared with its expected rate of reduction in the current context. Hijacking this process means that the brain gets fooled into estimating that it is doing far better than expected at reducing salient (high precision) error. Importantly, any context where this occurs gets marked as one that we will seek to inhabit. Over time, the repeated ability of the drug to induce this effect leads to an (aptly named) ‘habit’ of use – a habit whose grip is not loosened simply by the person realising that the drug is not, in fact, delivering pleasure, success or fulfilment. This is because, despite that high-level judgment, in the moment of use all those hidden estimations of unexpectedly good error-reduction become active. It’s important to note that the predictive brain here is not malfunctioning: it is doing just what it has evolved to do – reduce uncertainty. However, the brain didn’t evolve to manage the sorts of signals presented by drugs of addiction.
Addicted predictors can create a personal niche in which elements incompatible with their model are excluded
It might seem hard to understand how such clearly pernicious habits could sustain themselves indefinitely. Opioid addiction is manifestly not conducive to human flourishing. How can a (non-malfunctioning) predictive brain continue to sustain a model in which feeding this habit is, at any level, cast as positive, in defiance of a wealth of evidence to the contrary? What is the evolutionary utility of the predictive strategy if it allows our models to remain so persistently disconnected from our external situation?
To fully understand the self-reinforcing power of such habits, we need to look once more beyond the brain. We need to attend to how the process of acting to minimise surprise ensnares our environment into the overarching error-minimising process. At the simplest level, such actions might just involve ignoring immediate sources of error – as when alcoholics preserve the belief that they are functioning well by not looking at how much they’re regularly spending on drink. But our actions can also have a lasting effect on the structure of our environment itself, by moulding it into the shape of our cognitive model. Through this process, addicted predictors can create a personal niche in which elements incompatible with their model are excluded altogether – for instance, by associating only with others who similarly engage in, and thus do not challenge, their addictive behaviours.
This mutually reinforcing circularity of habit and habitat is not a unique feature of substance addiction. In 2010, the internet activist Eli Pariser introduced the term ‘filter bubble’ to describe the growing fragmentation of the internet as individuals increasingly interact only with a limited subset of sources that fit their pre-existing biases. Pariser laid the blame for these bubbles on the increasing use of predictive algorithms by big tech companies to deliver up exactly the sort of content an individual has interacted with in the past. From the perspective of the predictive brain, such personalisation technologies look less like a radical new development than an extension to the predictive algorithms we subconsciously run on ourselves in an effort to keep our environmental interactions within easily anticipatable bounds.
Indeed, the 21st century’s proliferation of digital enclaves might be less attributable to new technological impositions than to how the weakening of such geographical, social and political constraints has freed each individual to construct a personal environment in ever more specific ways. As the journalist Bill Bishop argued in The Big Sort (2008), which charts the movement of US citizens into increasingly like-minded neighbourhoods over the past century, this homophilic drive has long directed our movements through physical space. In the online world, it now occurs through more than a million subreddits and innumerable Tumblr communities serving everyone from queer skateboarders to incels, flat-Earthers and furries.
The environment that each surprise-minimising brain interacts with is not some shared and stable domain of regularities that can be relied upon as a neutral check to keep all of our predictive models in line. Instead, it is recast as a flexible resource for extracting and creating just those regularities that we predict. Perversely, the more flexible the environment, the more it allows for the creation of self-protective bubbles and micro-niches, and hence affords the entrenchment of rigid models.
The many ways that we can fall prey to our own predictive brains correspond to the various ways in which we can become trapped by our own estimations of the reliability of different predictions. Hawkins felt trapped by his own optimisations – precise high-level predictions that he would live a life of a certain kind. But by noticing the restricted shape of that life, he was eventually able to break the grip of his own self-model. This suggests that there are ways to hack our own predictive minds so as to escape at least some of the traps we have been examining. This is true in cases where people yearn for a more varied and engaging life. But similar principles underpin some of the most devastating psychopathologies, where, just as in the case of learned helplessness, ingrained expectations about the volatility and unpredictability of the environment are resistant to revision even when agents find themselves in more favourable environmental circumstances. A host of psychological and affective disorders such as major depression, anxiety, addiction and post-traumatic stress disorder (PTSD) can be broadly understood within those terms.
A more radical means of loosening these entrenched predictive models, which has been used for thousands of years in cultures across the world, is psychedelic drugs. After years of suppression of psychedelic research – documented by Michael Pollan in How to Change Your Mind (2018) – it’s only in the past decade that such research has really taken off. Evidence is emerging that psychedelics could offer a powerful new means to treat a range of affective disorders – including addiction, obsessive-compulsive disorder, PTSD and treatment-resistant depression – as well as easing ‘existential distress’ in end-of-life care. Non-clinical populations stand to benefit too – with feelings of increased ‘nature-relatedness’, ecological awareness and reduced anxiety.
These drugs are known to induce profound alterations in phenomenology – in sensory perception, mood and thought (including the perception of reality) – even ‘dissolving’ the usual sense of self in a way that strikingly resembles the loss of self in the Buddhist notion of nirvana. The writer, philosopher and psychedelic pioneer Aldous Huxley puts it like this in The Doors of Perception (1954):
To be shaken out of the ruts of ordinary perception, to be shown for a few timeless hours the outer and the inner world, not as they appear to an animal obsessed with survival or to a human being obsessed with words and notions … [this is] an experience of inestimable value …
Huxley conceptualised psychedelics using a metaphor that now seems strikingly prescient. He thought of the brain as a ‘reducing valve’ – stripping the enormous amount of information in the sensory input down to just a trickle that allows for adaptive interface with the environment. Under psychedelics, the valve is opened, and this filtration process is temporarily suspended. Predictive processing formalises this intuition.
On the predictive processing view, predictions are essentially compressive, where higher levels of the hierarchy – dealing with more abstract or invariant features of reality, and tracking longer timescales – strip the sensory signal of redundancies: that is, anything that’s not relevant to adaptive action. Under psychedelics, the ‘reducing valve’ is opened. From a predictive processing perspective, these drugs can be understood as loosening the grip of ingrained and rigid expectations about how sensory signals are caused, allowing the brain to create novel hypotheses about the world and how we relate to it.
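In its simplest Gaussian form, this ‘loosening’ has a precise expression: a percept is a precision-weighted average of prior prediction and sensory evidence, and lowering the prior’s precision shifts the blend towards the senses. The numbers below are arbitrary illustrations of that one formula, nothing more:

```python
def percept(prior_mean, prior_precision, sense_mean, sense_precision):
    """Posterior mean for two Gaussian sources: a precision-weighted blend."""
    total = prior_precision + sense_precision
    return (prior_precision * prior_mean + sense_precision * sense_mean) / total

# Ordinary waking state: a confident prior dominates ambiguous input.
ordinary = percept(0.0, 4.0, 1.0, 1.0)   # = 0.2, close to the prediction

# Relaxed prior (the opened 'reducing valve'): the same input dominates.
relaxed = percept(0.0, 0.5, 1.0, 1.0)    # ≈ 0.67, close to the evidence

print(round(ordinary, 2), round(relaxed, 2))
# → 0.2 0.67
```

The same ambiguous input yields a percept dominated by the prediction when the prior is confident, and dominated by the evidence when the prior is relaxed.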
Psychedelics can be seen as putting the brain into a ‘hot state’ of temporary malleability
There is burgeoning evidence that these drugs, when used properly – with ample attention to ‘set and setting’ – could offer a lasting means of cultivating a new mode of interfacing with the world for both clinical and nonclinical populations. The ‘relaxation’ of high-level beliefs that constrain and construct ordinary perception results in lower levels being, to some extent, ‘freed’ from the constraining influence of compressive predictions. The striking perceptual effects (‘tripping’) can be understood as lower levels rampantly overfitting best guesses to this rich yet newly puzzling sensory stream as a result of being less constrained by high-level influence – it’s as if the brain cycles through many more hypotheses in its effort to explain away the current sensory influx.
Metaphorically, then, psychedelics can be seen as putting the brain into a temporary ‘hot state’. An analogy can be made to annealing a metal – heating it so as to introduce a state of temporary malleability. When put in the hot state by psychedelics, the brain becomes malleable enough for its prediction-generating models to be remoulded. Under the right conditions – with proper consideration of context and post-experience integration – this has the potential to be profoundly therapeutic, at the same time as it underlines the crucial importance of responsible usage, set and setting.
For example, there is now a large body of evidence showing that psychedelic drugs such as lysergic acid diethylamide (LSD), psilocybin (the active ingredient in magic mushrooms) and even dimethyltryptamine (DMT – the ‘spirit molecule’ found in ayahuasca) can, when used carefully, play an important role in addressing many forms of treatment-resistant chronic depression. To see how this fits in with our picture, notice that people with severe depression have often formed a precise expectation of their own behaviours and responses – a predictive self-model that actively inhibits the joyful or playful exploration of their worlds, and which acts as a self-fulfilling prophecy of powerlessness and retreat. By inducing a temporary ‘hot’ state, psychedelic drugs seem able to intervene on our own high-level self-model, freeing us up to encounter the world and our sense of being in new and often helpful ways. These windows on other ways of being are not lost when the immediate effects of the drug subside but can instead mediate an empowering re-examination of our own life, goals and sense of self, other and nature.
Psychedelics offer a means of remoulding ingrained expectations about – for instance – volatility in our environment. Other ways of hacking our own predictive brains include the deliberate installation of helpful expectations, as seen in the use of so-called ‘honest placebos’ (when patients know that the drug they’re taking is a placebo). In these cases, predictions of relief seem to be activated regardless of the person knowing perfectly well that there is no active ingredient present. Honest (or ‘open-label’) placebos have proven effective in cases ranging from irritable bowel syndrome to cancer-related fatigue. And the higher the estimate of their power, the greater the effect – inert substances delivered by syringe are typically more effective than those delivered by pill, presumably because we automatically estimate this as a more powerful form of intervention.
All these are, however, fairly blunt interventions on our own predictions and (self) expectations. Less blunt interventions include structured practices such as meditation and mindfulness, which serve to retrain our own constraining expectations (eg, of permanence). A useful tool here is training attention to actively drop our sampling towards the sensory edge, and away from our set beliefs, disengaging the expectation of high-level stability itself. When this succeeds, we can live in the moment while still dealing with the changing contingencies of daily life. Such practices reflect something important yet easy to miss – our very human ability to turn our own mental states into objects for reflection and action. This brings us full circle to Hawkins, and that key thread in the process that led to his extreme attempt to see beyond his normal web of expectations about his daily life.
Hawkins’s algorithms were designer tools pushing him out of his routine envelope and allowing new patterns and experiences to emerge. He deliberately crafted them to achieve a certain goal – the goal of breaking the mould of his highly optimised lifestyle. The key to this kind of radical action lies in a process that, though seldom remarked upon, seems to us to lie at the very heart of much that is distinctively human about our relationship with the space of uncertainty. The two-step process starts by making our own predictive models and associated expectations visible, turning them into objects inspectable by ourselves and others. It then proceeds by devising tricks, schemes and ploys that can stress, challenge and sometimes productively break those models.
The first step comes ‘for free’ with symbolic culture. Spoken language, written text and a host of associated practices turn aspects of our own generative models into public (material) objects – words, books, diagrams – apt for sharing, refinement and multigenerational transmission. When Hawkins speaks of feeling trapped by the predictability of his own highly optimised weekly routines, he makes visible the way that his own model of ‘a good life’ is delivering a stream of choices and experiences that he now finds unsatisfying. This is a remarkable feat, and it is worth dwelling on.
Flexible symbolic codes, once in place, enable us to step back from our own generative models and model-based predictions, turning them into public objects apt for questioning, stress-testing and deliberate ‘breaking’. It strikes us as extremely unlikely that most nonhuman animals ever succeed in making their own life-models visible to themselves. Their models guide actions, but they are not themselves the objects of actions. Once in command of symbolic codes, however, the floodgates are open, and our own models and model-based expectations can become objects of scrutiny. This might be the single most transformative epistemic bonus conferred by material culture.
But there is more to come. For once our own best models are encountered as objects, we can do more than simply scrutinise them. We can take actions designed to break and rebuild the models themselves. The deliberate use of psychedelic drugs to ease the grip of our own high-level self-model belongs in this category. So do certain meditative practices that explicitly aim to relax the tyranny of our expectations of permanence. So does Hawkins’s creation of all those randomising life-algorithms.
Perhaps most notably of all, our own artistic, engineering and scientific practices often play just this kind of role. For example, diagrams, descriptions and scale models enable us to manipulate different aspects of a design independently, and to attend selectively to its different elements. This enables us to explore different outcomes as conditioned upon different choices in ways that ease the bonds of our own model-based expectations – much like physically shuffling a bunch of Scrabble tiles so as to help uncover new words. To enable such operations, our current beliefs and models need to exist as more than probabilistic trends in the way we navigate the world on the basis of stored knowledge. They need to exist as concrete items apt for attention, sharing and questioning.
Art is often in this model-revealing/model-breaking business too. It can be a way of materialising and confronting our own high-level assumptions about self, world and other, while doing so within a framework that steps back from daily concerns (think of being in a theatre watching the play Death of a Salesman) and hence is not usually experienced as genuinely threatening, even when it is subversive. Theoretical science, even more clearly, aims to codify our own best models of how minds and worlds work, delivering these up as objects for sharing, scrutiny and ‘productive breaking’.
Whatever the story, human minds became able to go where no animal minds had gone before. We became able to encounter our own predictive models as objects. Hawkins set out to break the grip of his own life-model. But it is notable that there is still a predictable regime in play, and one that he himself understands (indeed, one that he designed). For example, he knew that the algorithm would send him somewhere new every two months, and that it wouldn’t first send him suddenly to a new town or city every week, then (randomly) every day, then (randomly) not for 10 years.
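Hawkins's regime can be caricatured as a sampler whose outputs are random but whose cadence is fixed and known in advance. A minimal sketch of that idea, in Python – all names, cities and parameters here are our own illustrative inventions, not Hawkins's actual code:

```python
import random

def controlled_relocation_schedule(cities, years, interval_months=2, seed=0):
    """Pick a random destination at a fixed, known cadence.

    Each destination is unpredictable, but the rhythm of change is not:
    a move happens exactly every `interval_months` months. This is the
    'controlled uncertainty' of the essay, not volatility all the way up.
    """
    rng = random.Random(seed)  # seeded, so the schedule itself is reproducible
    moves = (years * 12) // interval_months
    return [rng.choice(cities) for _ in range(moves)]

schedule = controlled_relocation_schedule(
    ["Mumbai", "Ljubljana", "Holy Cross", "Fresno", "Taipei"], years=2)
# Two years at a two-month cadence yields exactly 12 relocations:
# the content of each move is random, but the timing never is.
```

The design point is that randomness lives only at the first-order level (which city); the higher-order structure (how often, from what pool) stays stable and self-imposed.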
It is interesting to speculate that it was this lack of volatility that enabled him to gain so much from his experiment, while avoiding the kinds of anxiety and fear that many of us recently felt as COVID-19 first began to turn the pattern of our lives upside down. Predictive brains expect to be in control and, when that control fails, they drive learning. Normally, the detection of high volatility in the environment should drive learning and exploration. Yet, under lockdown conditions, we were (rightly) told to stay put and do nothing.
This enforced passivity is very odd for us. One response was to take control of small worlds – baking, jigsaws, exercise. This closely resembles a response already seen in individuals on the autism spectrum: generating and inhabiting a more controlled environment. And it is a good response, a way of restoring some sense of mastery in the face of wider volatility. Impressive bodies of work in ‘computational psychiatry’ are now devoted to better understanding our relationship with uncertainty, and the many ways it can go wrong. We humans are, it seems, uncertainty management systems – and when uncertainty management goes awry, whether due to external or internal perturbations, we can all too easily lose our grip on self, world and other.
Perhaps the most revealing comment in Hawkins’s many talks is one made towards the end of ‘Leaning In to Entropy’. He remarks on how rapidly the strangest and most ‘non-him’ situations and places became the ‘new normal’, so much so that he could easily imagine life as that person in that once-alien place. This, we conjecture, is the predictive brain reasserting itself, reforming aspects of our own high-level self-model so as to get a grip on the new normal.
Hawkins’s takeaway message was simple: don’t let your own preferences become a trap. Yet on a kind of meta level, he remained trapped (in a good way) – his randomising algorithm simply fulfilling his new top-level preference for selecting in ways that sidestepped his first-order preference structure. We can’t help but intuit some kind of value in this experiment. Like art and science, it makes the invisible concrete, revealing the strong gravitational force of our own expectations.
It is also an object lesson in the surprising value of controlled uncertainty.
Mark Miller, Kathryn Nave, George Deane, Andy Clark