You are here

Authenticate thyself

In the mid-1950s, IBM approached Jacques Perret, a Classics professor at the Sorbonne, with a question. They were about to sell a new kind of computer in France, the Model 650. What, they asked, should it be called? Not the model itself, but rather the whole class of device it represented. An obvious option was calculateur, the literal French translation of ‘computer’.

But IBM wanted something that conveyed more than arithmetic. ‘Dear Sir,’ Perret replied. ‘How about ordinateur? It is a correctly formed word, which is even found in Littré [the standard 19th-century French dictionary] as an adjective designating God who brings order to the world. A word of this kind has the advantage of easily supplying a verb, ordiner …’ (My translation.)

Besides, Perret added, the implicitly feminine connotation already present in IBM’s marketing materials could carry over to the new term:

Re-reading the brochures you gave me, I see that several of your devices are designated by female agent names (trieuse, tabulatrice). Ordinatrice would be perfectly possible … My preference would be to go for l’ordinatrice électronique.

The female reference was not entirely inappropriate. Up until the mid-20th century, the term ‘computer’ meant an office clerk, usually a woman, performing calculations by hand, or with the help of a mechanical device. IBM’s new machine, however, was intended for general information-processing. The masculine and godlike version prevailed. The term soon entered common usage. Every computer in France became known as an ordinateur.

The IBM Model 650 was sold from 1953 to ’62. Its Console Unit ran on about 2,000 vacuum tubes in conjunction with a spinning magnetic drum capable of storing about 20,000 digits. It was the size of a couple of large refrigerators. The accompanying power supply was about the same size again. The first one was installed in the controller’s office of the Old John Hancock Building in Boston, headquarters of the John Hancock Mutual Life Insurance Company, in 1954. Its job was to calculate sales commissions for the company’s 7,000 or so insurance agents.

The Model 650 was a big success. IBM initially thought they might make 50 of them; in the end they sold more than 2,000. Not terribly impressive numbers in comparison with iPhone sales, to be sure, but not bad for a device whose price started at about $1.5 million in current US dollars. What drove sales, to IBM’s initial surprise, was the variety of its commercial applications. It was put to work to run payrolls, manage inventory, analyse sales, and control costs. Its hardware was quickly augmented by the first commercial disk drive, the RAMAC, which could store about 3.75 megabytes of data. Unlike systems where storage was based fully on punch cards, it could access that data right away. This enabled a novel form of processing that happened in real time. Air traffic control was an early application. But it also allowed companies to look up client records immediately, and – for example – decide on the spot whether to extend a policy or a credit line. From then on, the breadth and depth of ‘computerised’ data endlessly expanded. The paper records and handbooks of the late 19th century became the relational databases of the late 20th.

The phone you are probably reading this on is a spiritual descendant of that machine. We are used to the idea that, over the past 70 years, computers have gotten faster and faster, smaller and smaller. Their real power, though, lies in how they link the ordinary choices we make into systems that constantly adjust and modulate themselves. Think, for example, of people deciding which restaurant to go to and how to get there. They choose with the assistance of Apple or Google Maps. The map shows their position, and many options for their destination. The locations all have descriptions and ratings attached, together with information on how busy the place is likely to be. Perhaps they will be offered a coupon or some other deal. Once a choice is made, the phone helps find the most effective route, monitoring the position of their car, receiving information about the general flow of traffic, the state of the weather, the presence of accidents or speed traps, and so on as they make their way.

In navigating this flow, drivers also constitute it. As transportation planners like to say, these commuters are not in traffic, they are traffic. Their phones track them individually while also aggregating information about the global state of things using data from thousands of beacons just like theirs. Some information from the resulting network’s-eye view is fed back to the user. This aids individual drivers, helping them choose the right route. But this information also modulates the overall system by prompting drivers as they make their individual decisions. Would you like to accept a faster route, or stay on your present course? A speed trap is reported ahead. The next off-ramp is temporarily closed. Sometimes this mode of control takes the form of reassurance: there is a 20-minute delay, but you are still on the fastest route. (Please do not do anything rash, like believing you know a shortcut.) The individual user relies on the information flowing through their phone to make their choices, and their decisions then feed back into the overall system.

What is true of people in their role as drivers also applies in their role as diners, and a thousand other social activities. Data about the flow of choices is used to update and enhance a system’s global view of its own state. Once the meal is done, the guests might decide to rate the restaurant, leave a review, or share a photograph of their dessert. If they left their car at home and took an Uber instead, they will have rated and been rated by their drivers. On the way home, they may check to see if the selfie they took at dinner has gotten any likes.

What is happening here is more than an abstract flow of information. It is more than a means of surveillance. It is more than a price mechanism. Rather, it’s as if the air traffic control and insurance commission functions of the IBM 650 have been fused, shrunk, and wholly generalised. This is the real computing revolution. Much of what we do is immediately authenticated as we do it, stored as data, classified or scored on some sort of scale, and deployed in real time to modulate some outcome of interest – usually, the behaviour of a person, or a machine, or an organisation.

Perret’s instinct to name the device for a being ‘who brings order to the world’ proved prescient. It is through their ability to observe, judge and manage people across social domains that les ordinateurs exert their most significant powers in society. Everywhere, the bureaucratic logic of organisations merges with the calculative logic of machines, feeding on the data emitted by ever-smaller and more powerful devices that ended up first in the homes, then on the laps, and then in the hands of billions of individuals. From this mass of information, ordinateurs spit out scores that create difference, define priorities, organise queues, and provide a tremendously useful and powerful basis for action. They create order by categorising people, things and ideas, and then by matching them to each other, to social positions, to goods, services and prices.

The resulting patterns are what we think of as social structure – a sort of ordinal society, where computer-generated outputs become guideposts for choices. In the economic sphere, for example, these methods help set wages and work schedules. They calculate rents, price insurance, and determine eligibility for social services. They facilitate new forms of rent-seeking, and accelerate the development of new asset classes that can be sold on financial markets. They have also changed the relationship between individuals and the groups they form and belong to. They organise the flow of information, the distribution of social influence, and the means of political mobilisation.

Because of this transformation, our sense of who we are is assembled in a strange and tangled fashion. The machinery of ordinalisation attends carefully to individuals rather than coarse classes or groups. By doing so, it appears to liberate people from the constraints of social affiliations and to judge them for their distinctive qualities and contributions. It promises incorporation for the excluded, recognition for the creative, and just rewards for the entrepreneurial. And yet this emancipatory promise is delivered through systems that classify, sort and, above all, rank people with ever-greater precision and on a previously unimaginable scale. The resulting social order is a sort of paradox, characterised by constant tensions between personal freedom and social control, between the subjective élan of inner authenticity and the objective forces of external authentication. It gives rise to a certain way of being, a new kind of self, whose experiences are defined by the push for personal autonomy and the pull of platform dependency.

Friedrich Hayek’s book The Road to Serfdom (1944) warned that government control of the economy would destroy individual freedom and inevitably lead to tyranny. Today’s predicament is different. The tyranny may come, instead, from digital platforms that enhance individualism and interpersonal competition to such a degree that our ability to form meaningful social bonds and to act together has been fundamentally altered. We are now travelling down a road to selfdom, where we must cultivate and attend to distinctive digital identities, develop our own understanding of the world, and hope to harness technology to carve out spaces of personal sovereignty and domination.

In the early days of the internet, being online brought certain freedoms. Not only was online anonymity or pseudonymity common, it was celebrated as a kind of liberation. Users embraced the opportunity to experiment with different versions of themselves. This multiplication of identities was a feature, not a bug. It also reflected the technical architecture of a less integrated internet, which gave participants what we might call ‘interstitial liberty’. This is the liberty granted us by the gaps between systems that will not or cannot efficiently talk to one another. It is a kind of negative freedom. If your gaming profile cannot easily be linked to your professional email or your forum discussions, you enjoy a form of privacy that depends less on explicit legal protections and more on the technical limitations of systems that are connected in principle but not integrated in practice. At other times, the preservation of these gaps is more of a choice. Until recently, in the United States, undocumented immigrants were safely able to work and contribute to the tax system. This deliberate administrative separation allowed US businesses and governments to benefit from immigrant labour while also creating a functional sanctuary by which millions could fulfil their tax obligations (using individual taxpayer identification numbers) without fearing it would trigger deportation proceedings. The Department of Government Efficiency (DOGE) has decided to change all that.

This is also the liberty that shrinks further when private companies are suddenly pressured to share their own data with public agencies, as happened in China around financial credit scoring, or when US government agencies decide to scrutinise the social media profiles of visa and citizenship applicants. Closing these technical gaps and fusing data from market and state institutions not only makes surveillance much more pervasive, it makes it more powerful. Tools that recognise patterns, predict behaviours and detect anomalies can now work across previously separate domains. Today, staying anonymous requires elaborate countermeasures, whether through legal instruments like the ‘right to be forgotten’ or technical solutions like virtual private networks. However, in a world where digital presence is expected, protecting your privacy can make it look like you have something to hide. And perhaps you do. There are all sorts of potential embarrassments or vulnerabilities in the data about you. Proving one’s blamelessness is a near-impossible task.

Beyond identifiability, the more insistent question is one of authentic identity: who are you, really? The ordinateurs want to know. To help us unlock this information, they have transformed it into a matter of public record, to be shared proudly and widely. Social media companies skilfully exploit our thirst for sociability and our romantic ideals of self-realisation. They relentlessly encourage individuals (and organisations, too) to publicly express their core commitments and enrol allies to validate them.

The compulsion to authenticity frequently backfires. Being exposed as inauthentic can be devastating to reputations and livelihoods. The sociologist Angèle Christin has described savage online battles between vegan influencers who push the envelope of vegan purity or expose their rivals as secret meat-eaters. Other authenticity traps are more ominous, as when organisations use social media feeds as public proof of who we truly are – an agitator, a gangster, a covert terrorist. In his book Ballad of the Bullet (2020), the ethnographer Forrest Stuart found big gaps between the performances that drill musicians put on for social media consumption and the more banal reality of their lives. Young people making themselves look tough to sell music on YouTube may learn the hard way that law enforcement officers and judges tend to interpret these signs literally, rather than seeing them as the status games and identity play that they most likely are. Similarly, the Trump administration’s reliance on tattoos as one easily harvested, measurable piece of evidence of gang membership takes an often superficial marker and turns it into a datapoint in a deportation scoring system. And in a country where the government has taken it upon itself to use people’s professed views against them in immigration proceedings, the effect is chilling. Self-disclosures and social connections that until recently were sources of pride and support suddenly become potential liabilities.

Authenticity traps multiply in other ways, too. Generative AI increasingly blurs the boundaries between real and synthetic texts, images and sounds. Traditional concerns about inauthentic or misinterpreted performances have given way to more fundamental questions about truth. Hopeful startups raise millions of dollars to develop ‘cheat on everything’ AI tools, and jobseekers can artificially generate their application materials and even fake their job interviews. All of this has the effect of shifting emphasis from authenticity to authentication, from demonstrating the truth of one’s identity to proving the reality of one’s testimony. The question is no longer whether an identity is genuine (‘Is that really you?’) or even authentic (‘Who is the real you?’) but whether each element of your digital presence is unmediated by artificial intelligence (‘Is it really you?’). This emergent regime of authentication transforms interactions from a set of performances to be judged into a series of actions to be verified by machines at every step.

Being a legitimate self now requires one to be publicly identifiable, authentic and, increasingly, fully authenticated. What began as a celebration of individual uniqueness that avidly encouraged the production of digital evidence is evolving into an elaborate system of verification that will treat any trace as a potentially suspect record. As fake versions of ourselves start to circulate, we may soon find ourselves caught in endless cycles of proving and defending the reality of our own existence, submitting ourselves more and more to a machinery of institutionalised scepticism that would have repulsed the early internet’s champions of identity play and experimentation.

The political and technical crises of authentication extend well beyond the individual self. Knowledge itself has massively expanded and diversified with the rise of the internet. But it has also become more bespoke and more parochial in the process, as people interact with the web in ways that build upon, and further elaborate, their own personal convictions and representation efforts. The advent of generative AI possibly worsens the epistemic challenge: when everything must be authenticated, but fakes get more sophisticated all the time, how do we know anything?

We live in an era of disintermediated knowledge. With the click of a button, we can look up original legal documents, download rich datasets, have an AI assistant help write code to analyse them, and quickly write up the results. We should not lose sight of how stunning and remarkably empowering a transformation this has been in many ways. For all its problems, if you asked any scholar whether they would go back to a fully pre-digital, pre-networked world of knowledge-sharing, academic communication and data availability, the answer would overwhelmingly be ‘absolutely not’. But this transformation in everyday work lives has been accompanied by something else. The disposition to search has quietly become second nature. Today, ‘doing one’s own research’ is more than a habit of academic professionals. It is a moral imperative, a civic command, and – a little like being a good driver – a skill everyone thinks they have.

Individuals equipped with the capacity to search the network and query large language model (LLM) oracles, and in possession of the self-confidence and the means to broadcast their findings, tend to become an authoritative source of opinion. At least that is how it feels to them. We can also understand why knowledge produced in this manner is often so emotionally charged. The more people invest in researching and developing their own understanding, the more their pursuit of knowledge transforms into a form of personal revelation, where everyone is both seeker and interpreter of their own truth. What began as an exercise in independent reasoning becomes a matter of belief, belief defended all the more passionately because it seems to have been self-discovered rather than externally given.

In consequence, traditional hierarchies of knowledge and sources of expertise find themselves bypassed in favour of self-piloted algorithmic searches that generate a precisely ‘relevant’ answer to a query or a prompt, whether as a page of links or a summary paragraph of text. At its best, the ability to do this kind of work tends toward a kind of Deweyan democratic ideal, revitalising knowledge production as a participatory enterprise, making it work in a democratic spirit of open enquiry and collective truth-seeking. This is what many hoped for in the early days of the world wide web, up to and including the first waves of social media from blogging to Twitter. At its worst, though, the distribution of knowledge ends up channelled through platforms that mass-personalise results and promote engagement with extreme or misleading content because shock value and pandering are what produce advertising dollars. When the Canadian government in 2023 required internet companies to compensate media outlets for links to news published on their platforms, Meta simply blocked those links on Facebook and Instagram. The resulting information vacuum was quickly filled by unverified and Right-wing content, which helped prop up the local Trumpian candidate.

By now we are drowning in examples of tech platforms wielding market power to bend information ecosystems to their business needs, regardless of societal consequences. Google revolutionised search by, in effect, treating web pages as a huge reputational network. The relative authority of sites was set by a wider world of independent decisions to link or not to link to them. But the desire to tailor results to individual preferences increasingly became guided by the effectiveness of clickbait or ad placement. This has led to a fragmentation of the knowledge that people access, now produced to facilitate market manipulation by catering to pre-existing beliefs. While the sense of searching online as a form of active, critical thinking has persisted, finding good information can, for some, be difficult. This makes the work of integrating a sense of shared reality much harder. Even the very idea that we could arrive at a broadly accepted consensus on facts – regardless of their content – has grown more remote. It is down these knowledge sinkholes that the sense of ‘selfdom’ gets further elaborated, sometimes in tragic ways, through the belief that the self is the only true source of its own enlightenment.
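
For readers who want the mechanics, the reputational logic described above can be sketched in a few lines of code. Below is a minimal Python illustration in the spirit of PageRank’s power iteration; the four-page link graph and the damping factor of 0.85 are invented for the example, not a description of Google’s actual production system.

```python
import numpy as np

# Toy link graph (an illustrative assumption, not real data):
# each page 'votes' for the pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic matrix: M[j, i] is the probability of hopping
# from page i to page j by following a random outgoing link.
M = np.zeros((n, n))
for p, outs in links.items():
    for q in outs:
        M[idx[q], idx[p]] = 1.0 / len(outs)

d = 0.85                    # damping: follow a link vs jump anywhere
rank = np.full(n, 1.0 / n)  # start with equal authority everywhere
for _ in range(100):        # iterate until the scores settle
    rank = (1 - d) / n + d * M @ rank

for p in sorted(pages, key=lambda p: -rank[idx[p]]):
    print(f"{p}: {rank[idx[p]]:.3f}")
```

In this toy network, the page attracting the most independent links ends up with the highest score: authority is crowd-sourced from the structure of the web itself rather than assigned by any editor, which is precisely the logic that personalisation and ad placement later came to distort.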

The combination of epistemological self-centredness and hyperconnectivity makes people susceptible to diffuse forms of ‘supersense’-making (to borrow a term from Hannah Arendt). Seeking some meaningful truth, people search for significant clues scattered across the internet, using commercial algorithms and recommender systems to connect the disparate pieces of information they venture upon into some sort of coherent worldview. What may begin as a playful existential quest can easily crystallise into reality-bending beliefs that thrive on and foster new social types and politically potent associations. At its peak, QAnon exemplified the interactions between the searching disposition, digital mediations and for-profit targeting. Its members saw themselves as critical thinkers uniquely equipped to discover hidden truths and interpret byzantine clues. They ferociously denied being part of a cult, since, as one of them put it to the researcher Peter Forberg, ‘no cult tells you to think for yourself.’

One might think that the advent of LLMs will counter these tendencies. Perhaps if properly integrated with a search engine, an LLM might distil vast amounts of information into coherent responses that do not pander. It can certainly provide seemingly authoritative summaries that look like answers, though its inner workings remain essentially opaque. But it is not clear whether such systems – even if they work as advertised – can solve the problem of reliable knowledge in a balkanised public sphere. Just as commercial incentives led to fake content and filter-bubbles, LLMs likely face the same pressures in a world of sharply diminishing returns. Because the firms training them desperately need to make money, the familiar business logics of personalisation and tiered benefits are likely to reassert themselves, with customised epistemic universes now served up by models catering to publics with particular tastes and different abilities to pay.

What happens when authenticated, epistemically egocentric selves enter the world of politics? If you are an authentic, self-directed individual, your greatest cultural fear is of being swallowed up by mass society, just as your greatest political fear is of surveillance by an authoritarian state. These fears are still very much with us. But in a world chock-full of socially recognised categories and authenticated identities, new dilemmas present themselves. On the individual side, everything – public behaviours, statements, metrics – can potentially become a source of difference, and thus of identity. On the organisational side, the data that users generate will lump or split them in increasingly specific, fleeting and often incomprehensible ways. The more precise social classifications are on either side or both, the more opportunities arise for moral distinctions and judgments.

The main casualty is the possibility of broad-based, stable political alliances. The more citizens are treated, individually, as objects of market intervention, the more disaggregated politics becomes. Traditional voter-targeting began with a political message and sought out individuals receptive to it. The rise of big data inverts this logic, starting from the cultural dispositions of electorates and building resonant messages from the ground up. Before Cambridge Analytica, Italy’s Five Star Movement (M5S) arguably pioneered this data-driven approach to politics. The essayist and novelist Giuliano da Empoli describes it in his book Les ingénieurs du chaos (‘The Engineers of Chaos’, 2019). The whole thing started in 2005, when a digital marketing specialist with a taste for direct democracy, Gianroberto Casaleggio, recruited a popular comedian and satirist, Beppe Grillo, to launch an eponymous blog to share his political disillusion and outrage with the public. The blog encouraged public participation, allowing Casaleggio – succeeded after his death in 2016 by his son Davide – to track resonant grievances and proposals through likes, comments and user feedback, and to test, tailor and refine Grillo’s political messaging. The result was the birth of the first ‘algorithmic party’, whose chaotic ideology was cobbled together from insights supplied by the data. Soon, ‘the people of the blog’ would be invited to migrate to the streets, supported by the social media infrastructure of another digital tool, the Meetup app. In 2018, the M5S became the largest party in Italy, helping form a short-lived coalition government.

Modern political campaigns have evolved this approach into something more sophisticated and arguably more manipulative. Social media-generated data about cultural practices, emotions and dispositions on a wide range of subjects helps craft new narratives and aesthetics, reshape people’s informational environment and social connections, and activate their votes at strategic times. The desired political goal is typically achieved through an elaborate ‘persuasion architecture’ of personalised messaging and repeated exposure. For instance, advertising algorithms find patterns of successful actions (donations, likes, purchases, shares) and target similar users who can be induced to repeat these behaviours. Each iteration uses real-time response data to create an ever-more granular map of manipulable targets. Political mobilisation is, in effect, governed cybernetically through algorithms. Its operational logic emerges from constellations of variables that are hard to grasp all at once, giving the resulting political formations an emergent, ad hoc quality, somewhat independent from traditional mediating bodies like political parties and social movements.
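
To make that loop concrete, here is a minimal Python sketch of ‘lookalike’ targeting of the general kind described above: score every user by similarity to past converters, then queue the closest matches for the next wave of messaging. All specifics here (the random feature matrix, the conversion flags, the top-50 cutoff) are invented assumptions for illustration, not any real platform’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rows are users; columns are behavioural signals (likes, shares,
# donations, purchases) -- random stand-ins for real tracking data.
users = rng.random((1000, 4))
converted = rng.random(1000) < 0.05  # who responded to the last message

# Profile of a 'successful action': the average converter.
centroid = users[converted].mean(axis=0)

# Cosine similarity between every user and the converter profile.
sims = (users @ centroid) / (
    np.linalg.norm(users, axis=1) * np.linalg.norm(centroid)
)

# Target the most similar non-converters with the next round; their
# responses would update 'converted' and sharpen the next centroid.
candidates = np.argsort(-sims)
targets = [i for i in candidates if not converted[i]][:50]
print(f"next wave: {len(targets)} lookalike targets, "
      f"top similarity {sims[targets[0]]:.3f}")
```

Each pass through this loop narrows the map: the responses it provokes become the training data for the next round, which is what gives the ‘persuasion architecture’ its ever-more granular quality.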

The weakening of these conventional structures and the ability to individualise political messaging also produces highly personalised forms of social domination. Populist leaders thrive on perceptions that they have a direct connection to the public – even though this connection is often attended to by an entire ecosystem, a carefully constructed ‘propaganda feedback loop’. Owners of social media can even force this connection onto users via self-serving algorithmic manipulation, as Elon Musk and Donald Trump have both reportedly done on their respective platforms. Intoxicated by the ideal figure of the ‘sovereign individual’ unconstrained by national borders, social norms or the law, a small cadre of ultra-wealthy men have been able to reclaim control over the state and traditional elites through a direct appeal to the masses and to market freedom. Pushing the logic of sovereign individuality to its logical extreme, some are busy trying to carve out independent territories for themselves, complete with their own rules and possibly their own currencies. Others invest in revolutionising existing institutions from within. Until he withdrew from his government role, Musk hoped to oversee a radical remaking of the state as a centralised and largely automated computing infrastructure. Trump made himself into a digital token, offering his own status and reputation as an investment opportunity to enthusiastic followers and anyone striving for access, while his sons leveraged their father’s position to build (or attempt to build) a cryptocurrency empire.

Most people have to work harder. The digital economy is full of rags-to-riches stories built on the back of technical systems, encouraging every teenager to compete for and capitalise on quick fame. But the rules of these games are twisted. Money, time and social capital play a big role in propping up some individuals over others. This economy rides on payments for algorithmic boosters, ancillary supports for production and advertising, and connections to others, including bots, who can relay the message, both within and outside online spaces. The rise of self-branding is in part a mark of desperation, an ideological smokescreen masking the much bleaker social reality on the ground. As the deployment of digital technologies continues to generate ever-more stratospheric concentrations of wealth, the masses sink deeper into the void left by the evisceration of social solidarity and the rise of automation. The often-missed point about sovereign individuals is that not everyone gets to be one. But everyone should aspire to be one, and in the meantime follow one, as they walk down the road to selfdom.

When he wrote to IBM France in 1955, Jacques Perret had one slight reservation about his chosen name for the new machine:

The downside is that ordination refers to a religious ceremony [to ordain]; but the two fields of meaning (religion and computing) are so distant and the ordination ceremony known, I believe, to so few people that the inconvenience is perhaps minor. Besides, your machine would be ordinateur (and not ordination).

Professor Perret was more correct than he knew. In the 70 years since he baptised it, the descendants of the Model 650 have indeed taken on quasi-religious functions in modern society. Computers authenticate our souls and find our innermost truths. They shape our search for meaning in a disorienting and fragmented world. They foster new forms of political communion and sectarian schism. Above it all stands the sovereign individual – the embodiment of modern selfdom, served by the ordinateur’s ruthless logic and its power, while it lasts, to manufacture gold out of bits.

Marion Fourcade