How AI is Being Stolen

Then & Now, thenandnow.co – 26 April 2024
This is a story about stolen intelligence. It’s a long but necessary history, about the deceptive illusions of AI, about Big Tech goliaths against everyday Davids. It’s about vast treasure troves and mythical libraries of stolen data, and the internet sleuths trying to solve one of the biggest heists in history. It’s about what it means to be human, to be creative, to be free. And what the end of humanity – post-humanity, trans-humanity, the apocalypse even – looks like.

It’s an investigation into what it means to steal, to take, to replace, to colonise and conquer. Along the way we’ll learn what AI really is, how it works, and what it can teach us about intelligence – about ourselves – turning to some historical and philosophical giants along the way.

Because we have this idea that intelligence is this abstract, transcendent, disembodied thing, something unique and special, but we’ll see how intelligence is much more about the deep, deep past and the far, far future, something that reaches out powerfully through bodies, people, the world.

Sundar Pichai, CEO of Google, was reported to have claimed that, ‘AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire’. We’ll see how that might well be true. It might change everything dizzyingly quickly – and like electricity and fire, we need to find ways of making sure that vast, consequential and truly unprecedented change can be used for good – for everyone – and not evil. So we’ll get to the future, but it’s important we start with the past.

 


 

A History of AI: God is a Logical Being

Intelligence. Knowledge. Brain. Mind. Cognition. Calculation. Thinking. Logic. 

We often use these words interchangeably, or at least with a lot of overlap, and when we do drill down into what something like ‘intelligence’ means, we find surprisingly little agreement.

Can machines be intelligent in the same way humans can? Will they surpass human intelligence? What does it really mean to be intelligent? Commenting on the first computers, the press referred to them as ‘electronic brains’.

 

Manchester, England

There was a national debate in Britain in the fifties around whether machines could think. After all, a computer of the period could already out-calculate any human by many orders of magnitude.

The father of both the computer and AI, Alan Turing, contributed to the discussion in a BBC radio broadcast in 1951, claiming that ‘it is not altogether unreasonable to describe digital computers as brains’.

This coincidence – between computers, AI, intelligence, and brains – strained the idea that AI was one thing. A thorough history would require including transistors, electricity, computers, the internet, logic, mathematics, philosophy, neurology, society. Is there any understanding of AI without these things? Where does history begin?

This ‘impossible totality’ will echo through this history, but there are two key historical moments: The Turing Test and the Dartmouth College Conference.

Turing wrote his now famous paper – Computing Machinery and Intelligence – in 1950. It began with: ‘I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.’

He suggested a test – that, for a person who didn’t know who or what they were conversing with, if talking to a machine was indistinguishable from talking to a human then it was intelligent. 

Ever since, the conditions of a Turing Test have been debated. How long should the test last? What sort of questions should be asked? Should it just be text based? What about images? Audio? One competition – the Loebner Prize – offered $100,000 to anyone who could pass the test in front of a panel of judges.

As we pass through the next 70 years, we can ask: has Turing’s Test been passed?

 

New Hampshire, USA

A few years later, in 1955, one of the founding fathers of AI, John McCarthy, and his colleagues, proposed a summer research project to debate the question of thinking machines.

When deciding on a name McCarthy chose the term ‘Artificial Intelligence’.

In the proposal, they wrote, ‘an attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.

The aim of the conference was to discuss questions such as whether machines could ‘self-improve’ and how neurons in the brain could be arranged to form ideas, as well as topics like creativity and randomness – all to contribute to research on thinking machines. The conference was attended by at least twenty now well-known figures, including the mathematician John Nash.

Along with Turing’s paper, it was a foundational moment, marking the beginning of AI’s history.

But there were already difficulties that anticipated problems the field would face to this day. Many bemoaned the ‘artificial’ part of the name McCarthy chose. Does calling it artificial intelligence not limit what we mean by intelligence? What makes it artificial? What if the foundations are not artificial but the same as human intelligence? What if machines surpass human intelligence?

There were already suggestions that the answer to these questions might not be technological, but philosophical. 

Because despite machines in some ways being more intelligent – making faster calculations, fewer mistakes – it was clear that this alone didn’t account for what we call intelligence. Something was missing.

The first approach to AI, one that dominated the first few decades of research, was called the ‘symbolic’ approach.

The idea was that intelligence could be modelled symbolically by imitating or coding a digital replica of, for example, the human mind. If the mind has a movement area, you code a movement area, an emotional area, a calculating area, and so on. Symbolic approaches essentially made maps of the real world in the digital world. 

If the world can be represented symbolically, AI could approach it logically.

For example, you could symbolise a kitchen in code, symbolise a state of the kitchen as clean or dirty, then program a robot to logically approach the environment – if the kitchen is dirty then clean the kitchen.

McCarthy, a proponent of this approach, wrote: ‘The idea is that an agent can represent knowledge of its world, its goals and the current situation by sentences in logic and decide what to do by [deducing] that a certain action or course of action is appropriate to achieve its goals.’

It makes sense because both humans and computers seem to work in this same way.

If the traffic light is red then stop the car. If hungry then eat. If tired then sleep.

The appeal to computer programmers was that approaching intelligence this way lined up with binary – the root of computing – that a transistor can be on or off, a 1 or 0, true or false. Red traffic light is either true or false, 1 or 0, it’s a binary logical question. If on, then stop. It seems intuitive and so building a symbolic, virtual, logical picture of the world in computers quickly became the most influential approach.
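As an illustration – hypothetical code, not taken from any historical system – the symbolic picture can be sketched in a few lines: the world is represented as true/false facts, and behaviour is a set of if-then rules.

```python
# A toy "symbolic" world: facts are binary, behaviour is if-then rules.
# Purely illustrative -- not code from any historical AI system.
world = {"kitchen_dirty": True, "traffic_light_red": False, "hungry": True}

def decide(world):
    """Apply simple if-then rules to the symbolic world state."""
    actions = []
    if world["kitchen_dirty"]:
        actions.append("clean the kitchen")
    if world["traffic_light_red"]:
        actions.append("stop the car")
    if world["hungry"]:
        actions.append("eat")
    return actions

print(decide(world))  # ['clean the kitchen', 'eat']
```

The appeal – and the limitation – is visible even here: every fact must be strictly true or false, and every behaviour must be anticipated in advance by a programmer.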

Computer scientist Michael Wooldridge writes that this was because, ‘It makes everything so pure. The whole problem of building an intelligent system is reduced to one of constructing a logical description of what the robot should do. And such a system is transparent: to understand why it did something, we can just look at its beliefs and its reasoning’.

But a problem quickly emerged. Knowledge turned out to be far too complex to be represented neatly by these simple, true-false, if-then rules. One reason is shades of uncertainty: ‘if hungry then eat’ is not exactly true or false. There’s a gradient of hunger.

But another problem was that calculating what to do from these seemingly simple rules required much more knowledge and many more calculations than first assumed. The computing power of the period couldn’t keep up.

Take this simple game: The Towers of Hanoi. The object is to move the disks from the first to the last pole in the fewest number of moves without placing a larger disk on top of a smaller one.

We could symbolise the poles, the disks, and each possible move and the results of each possible move into the computer. And then a rule for what to do depending on each possible location of the disks. Relatively simple.

But consider this. With three disks the game is solvable in 7 moves. Five disks take 31 moves. Ten disks, 1,023 moves. Twenty disks, 1,048,575 moves. And for 64 disks, moving one disk each second, it would take almost 600 billion years to complete the game.
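The growth is easy to verify with a short recursive solver – the standard textbook algorithm, not anything specific to the systems of the period – which always takes 2ⁿ − 1 moves for n disks:

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Solve Towers of Hanoi recursively, recording every move."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, dst, aux, moves)   # move n-1 disks out of the way
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # move the n-1 disks back on top
    return moves

for disks in (3, 5, 10, 20):
    print(disks, len(hanoi(disks)))  # 7, 31, 1023, 1048575
```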

In AI this problem was named ‘combinatorial explosion’: as you increase the number of possible actions, the number of combinations to consider – the complexity – becomes incomprehensibly vast and technologically unmanageable.

And the Tower of Hanoi is a simple game. Combinatorial explosion became even more of a problem with games like chess or Go. And a human problem like driving is infinitely more complicated. The red light is on or off, but it might be broken, it might have graffiti or snow on it, pedestrians might walk out regardless, a child might run across – and that’s just the first element of a vast environment, the impossible totality.

Approaching AI this way was called ‘search’ – for each move, the computer had to search through each possible scenario and every bit of information to decide what to do.

In robotics, a similar approach was being taken, and proving even more complicated.

The development of AI coincided with technological advances in other areas: infrared, radars, cameras, microphones, batteries.

In 1971, Terry Winograd at MIT was developing a program called SHRDLU that aimed to symbolically model something he called ‘blocks world’.

In this virtual world, a user could ask the program to manipulate the blocks in different ways:

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I DON’T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: BY “IT”, I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK.

A year later, researchers at the Stanford Research Institute built a real-life blocks world.

SHAKEY was a real robot with bump sensors called ‘cat’s whiskers’ and laser range finders to measure distance.

The robotics team ran into problems similar to those of the Towers of Hanoi. The environment was much more complicated than it seemed; the room had to be painted in a specific way for the sensors to work properly.

The technology of the time could not keep up, and combinatorial explosion, the complexity of any environment, became such a problem that the 70s and 80s saw what’s now referred to as the AI winter.

 

History of AI: The Impossible Totality of Knowledge

By the 70s, some were beginning to make the case that something was being left out: knowledge.  The real world is not towers of Hanoi, robots and blocks – knowledge about the world is central. However, logic was still the key to analysing that knowledge. How could it be otherwise?

For example, if you want to know about animals, you need a database:

IF animal gives milk THEN animal is mammal
IF animal has feathers THEN animal is bird
IF animal can fly AND animal lays eggs THEN animal is bird
IF animal eats meat THEN animal is carnivore

Again, this seems relatively simple, but even an example as basic as this requires a zoologist to provide the information. We all know that mammals are milk-producing animals, but there are thousands of species of mammal and a lot of specialist knowledge. As a result, this approach was named the ‘expert systems’ approach. And it led to one of the first big AI successes.

Researchers at Stanford used this approach, working with doctors to produce a system called MYCIN that diagnosed blood diseases. It used a combination of knowledge and logic.

If a blood test is X THEN perform Y.

Significantly, they realised that the application had to be credible if professionals were ever going to trust and adopt it. So MYCIN could show its workings and explain the answers it gave.

The system was a breakthrough. At first it proved to be as good as humans at diagnosing blood diseases.

Another similar system called DENDRAL used the same approach to analyse the structure of chemicals. DENDRAL used 175,000 rules provided by chemists.

Both systems proved that this type of expert knowledge approach could work. 

The AI winter was over and significantly, research began attracting investment.

But once again, expert system developers encountered a new serious problem. The MYCIN database very quickly became outdated.

In 1983, Edward Feigenbaum, a researcher on the project, wrote, ‘The knowledge is currently acquired in a very painstaking way that reminds one of cottage industries, in which individual computer scientists work with individual experts in disciplines painstakingly […]. In the decades to come, we must have more automatic means for replacing what is currently a very tedious, time-consuming, and expensive procedure. The problem of knowledge acquisition is the key bottleneck problem in artificial intelligence’.

Because of this, MYCIN was not widely adopted. It proved expensive, quickly obsolete, legally questionable, and difficult to establish with doctors widely enough. Logic was understandable – but the collecting and the logistics of collecting knowledge was becoming the obvious central problem.

In the 80s, influential computer scientist Douglas Lenat began a project that intended to solve this.

Lenat wrote: ‘[N]o powerful formalism can obviate the need for a lot of knowledge. By knowledge, we don’t just mean dry, almanack-like or highly domain-specific facts. Rather, most of what we need to know to get by in the real world is… too much common-sense to be included in reference books; for example, animals live for a single solid interval of time, nothing can be in two places at once, animals don’t like pain… Perhaps the hardest truth to face, one that AI has been trying to wriggle out of for 34 years, is that there is probably no elegant, effortless way to obtain this immense knowledge base. Rather, the bulk of the effort must (at least initially) be manual entry of assertion after assertion’.

The goal of Lenat’s CYC project was to teach AI all of the knowledge we usually think of as obvious. He said: ‘an object dropped on planet Earth will fall to the ground and that it will stop moving when it hits the ground but that an object dropped in space will not fall; a plane that runs out of fuel will crash; people tend to die in plane crashes; it is dangerous to eat mushrooms you don’t recognize; red taps usually produce hot water, while blue taps usually produce cold water; … and so on’.

Lenat and his team estimated that it would take 200 years of work, and they set about laboriously entering 500,000 rules on taken-for-granted things like bread is a food or that Isaac Newton is dead.

They quickly ran into problems. The CYC project’s blind spots were illustrative of how strange knowledge can be.

In an early demonstration, it didn’t know whether bread was a drink or that the sky was blue, whether the sea was wetter than land, or whether siblings could be taller than each other.

These simple questions reveal something under-appreciated about knowledge. Often, we don’t explicitly know something ourselves yet despite this the answer is laughably obvious. We might not have ever thought about the question is bread a drink or is it possible for one sibling to be taller than another, but when asked, we implicitly, intuitively, often non-cognitively just know the answers based on other factors.

This was a serious difficulty. No matter how much knowledge you entered, the ways that knowledge is understood, how we think about questions, the relationships between one piece of knowledge and another, the connections we draw on, are often ambiguous, unclear, and even strange.

Logic struggles with nuance, uncertainty, probability. It struggles with things we implicitly understand but also might find difficult to explicitly explain.

Take one common example you’ll find in AI handbooks:

Quakers are pacifists.
Republicans are not pacifists.
Nixon is a Republican and a Quaker.

Is Nixon a pacifist or not? A computer cannot answer this logically with this information; it sees a contradiction. A human, meanwhile, might explain or resolve the problem in many different ways, drawing on lots of different ideas – uncertainty, truthfulness, complexity, history, war, politics.
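A few hypothetical lines make the problem concrete: naive rule application derives both conclusions at once, and the logic alone offers no way to choose between them.

```python
# The "Nixon diamond": two perfectly reasonable rules, one contradiction.
# Illustrative only -- resolving this needs non-monotonic/default logic.
rules = {
    "quaker": ("pacifist", True),
    "republican": ("pacifist", False),
}

def infer(memberships):
    """Apply every matching rule and collect the conclusions."""
    return {rules[group] for group in memberships}

print(infer({"quaker"}))                # a single, consistent conclusion
print(infer({"quaker", "republican"}))  # both True and False at once
```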

The big question for proponents of expert-based knowledge systems like CYC – which still runs to this day – is whether complexities can ever be accounted for with this logic based approach.

Most intelligent questions aren’t of the if-then, yes-no, binary sort, like: is a cat a mammal? 

Consider the question ‘are taxes good?’ It’s of a radically different kind than ‘is a cat a mammal?’. Most questions rely on values, depend on contexts, definitions, assumptions, are subjective.

Wooldridge writes: ‘The main difficulty was what became known as the knowledge elicitation problem. Put simply, this is the problem of extracting knowledge from human experts and encoding it in the form of rules. Human experts often find it hard to articulate the expertise they have—the fact that they are good at something does not mean that they can tell you how they actually do it. And human experts, it transpired, were not necessarily all that eager to share their expertise’.

But CYC was on the right path. Knowledge was obviously needed. It was a question of how to get your hands on it, how to digitise it, and how to label, parse, and analyse it. As a result of this, McCarthy’s idea – that logic was the centre of intelligence – fell out of favour. The logic-centric approach was like saying a calculator is intelligent because it can perform calculations, when it doesn’t really know anything. More knowledge was key.

The same was happening in robotics. 

Australian roboticist Rodney Brooks, an innovator in the field, argued that the issue with simulations like Blocks World was precisely that they were simulated and tightly controlled. Real intelligence didn’t evolve that way, and so real knowledge had to come from the real world.

He argued that perhaps intelligence wasn’t something that could be coded in but was an ‘emergent property’ – something that emerges once all the other components are in place. If artificial intelligence could be built up from everyday experience, genuine intelligence might develop once more basic conditions had been met. In other words, intelligence might be bottom-up, arising out of all of the parts, rather than top-down, imparted from a central intelligent point into all of the parts. Evolution, for example, is bottom-up, slowly adding more and more complexity to single-cell organisms until consciousness and awareness emerge.

In the early 90s, Brooks was at MIT’s Artificial Intelligence Laboratory and railed against the idea that intelligence was a disembodied, abstract thing. Why, he asked, could a machine beat any human at chess but not pick up a chess piece better than a child? Not only that, the child moves the hand to pick up the chess piece autonomically, without any obvious complex computation going on in the brain. In fact, the brain doesn’t seem to have anything like a central command centre – all of the parts interact with one another, more like a city than a pilot flying the entire thing.

Intelligence was connected to the world, not cut off, ethereal, transcendent, and abstract.

Brooks worked on intelligence as embodied – connected to its surroundings through sensors, cameras, microphones, arms and lasers. The team built a humanoid robot called Cog. It had thermal sensors and microphones but, importantly, no central control point. Each part worked independently but interacted with the others – they called it ‘decentralised intelligence’.

It was an innovative approach but never quite worked. Brooks admitted Cog lacked ‘coherence’.

And by the late 90s, researchers were realising that computer power still mattered.

In 1996, IBM’s chess AI – Deep Blue – was beaten by world champion Garry Kasparov.

Deep Blue was an expert knowledge system – it was programmed with the help of chess players not just by calculating each possible move, but by including things like best opening moves, concepts like ‘lines of attack’, or ideas like choosing moves based on pawn position.

But IBM also threw more computing power at it. Deep Blue could search through 200 million possible moves per second with its 500 processors.

It played Kasparov again in 1997. In a milestone for AI, Deep Blue won. At first, Kasparov accused IBM of cheating, and to this day maintains foul play of a sort. In his book, he recounts an episode in which a chess player working for IBM admitted to him that: ‘Every morning we had meetings with all the team, the engineers, communication people, everybody. A professional approach such as I never saw in my life. All details were taken into account. I will tell you something which was very secret[…] One day I said, Kasparov speaks to Dokhoian after the games. I would like to know what they say. Can we change the security guard, and replace him with someone that speaks Russian? The next day they changed the guy, so I knew what they spoke about after the game’.

In other words, even with 500 processors and 200 million moves per second, IBM may still have had to program in very specific knowledge about Kasparov himself by listening in to conversations – this, if maybe apocryphal, was at least a premonition of things to come…

 

The Learning Revolution

In 2014, Google announced it was acquiring a small, relatively unknown, four-year-old AI lab from the UK for a reported $650 million. The acquisition sent shockwaves through the AI community.

DeepMind had done something that on the surface seemed quite simple: beaten an old Atari game.

But how it did it was much more interesting. New buzzwords began entering the mainstream: machine learning, deep learning, neural nets.

What those knowledge-based approaches to AI had found difficult was finding ways to successfully collect that knowledge. MYCIN had quickly become obsolete. CYC missed things that most people found obvious. Entering the totality of human knowledge was impossible, and besides, an average human doesn’t have all of that knowledge but still has the intelligence researchers were trying to replicate. 

A new approach emerged: if we can’t teach machines everything, how can we teach them to learn for themselves?

Instead of starting from having as much knowledge as possible, machine learning begins with a goal. From that goal, it acquires the knowledge it needs itself through trial and error.

Wooldridge writes that ‘the goal of machine learning is to have programs that can compute a desired output from a given input, without being given an explicit recipe for how to do this’.

Incredibly, DeepMind had built an AI that could learn to play and win not just one Atari game, but many of them, all on its own.

The machine learning premise they adopted was relatively simple.

The AI was given the controls and a single preference: increase the score. Then, through trial and error, it would try different actions, building on what worked and discarding what didn’t. A human assistant could help by nudging it in the right direction if it got stuck.

This is called ‘reinforcement learning’. If a series of actions led to the AI losing a point it would register that as likely bad, and vice versa. Then it would play the game thousands of times, building on the patterns that worked.

What was incredible was that it didn’t just learn the games – it quickly surpassed the humans, learning to play 29 out of 49 Atari games at a superhuman level.

The most often demonstrated example is Breakout: move the paddle, destroy the blocks with the ball. To the developers’ surprise, the AI learned to tunnel up the side so the ball would get trapped at the top, bouncing around and destroying the blocks without the paddle having to do anything. The technique was described as spontaneous, independent, and creative.

Next, DeepMind beat a human player at Go, commonly believed to be harder than chess, and likely the most difficult game in the world.

Go is deceptively simple. You take turns to place a stone, trying to block out more territory than your opponent while encircling their stones to get rid of them.

AlphaGo was trained on 160,000 top games and played over 30 million games itself before beating Lee Sedol in 2016.

Remember combinatorial explosion. This was always a problem with Go. Because there are so many possibilities it’s impossible to calculate every move. 

Instead, DeepMind’s method was based on sophisticated guessing around uncertainty. It would calculate the chances of winning based on a move rather than calculating and playing through all the future moves after each move. The premise was that this is more how human intelligence works. We scan, contemplate a few moves ahead, reject, imagine a different move, and so on.

On move 37 of the second game against Sedol, AlphaGo made a move that took everyone by surprise. None of the humans could understand it, and professionals described it as ‘creative’, ‘unique’ and ‘beautiful’ – as well as ‘inhuman’.

The victory made headlines around the world. The age of machine learning had arrived.

 

What Are Neural Nets?

In 1991, two scientists wrote, ‘The neural network revolution has happened. We are living in the aftermath.’ 

You might have heard some new buzzwords thrown around – neural nets, deep learning, machine learning. I’ve come to believe that this revolution is probably the most historically consequential we’ll go through as a species. It’s fundamental to what’s happening with AI. So bear with me: jump on board the neural pathway rollercoaster, buckle up, get those synapses ready, and we’ll try to make this as pain-free as possible.

Remember that the symbolic approach we talked about tried to make a kind of one-to-one map of the world, and that machine learning, instead, learns for itself through trial and error. AI mostly does this using neural nets.

Neural nets are revolutionising the way we think about not just AI, but intelligence. They’re based on the premise that what matters are connections, patterns, pathways.

Artificial neural nets are inspired by neural nets in the brain.

Both in the brain and in artificial neural nets, you have basic building blocks: neurons, or nodes. The neurons are layered, and there are connections between them.

Each neuron can activate the next. The more neurons that are activated, the stronger the activation of the next, connected neuron. And if that neuron fires strongly enough, it will pass a threshold and fire the next neuron. And so on, billions of times.

In this way intelligence can make predictions based on past experiences.

I think of neural nets – in the brain and artificially – as something like ‘commonly travelled paths’. The more often neurons fire together successfully, the more their connections strengthen. Hence the phrase, ‘neurons that fire together wire together’.
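A single artificial neuron is just a weighted sum of its inputs passed through a squashing function: strong enough combined input means strong output to the next layer. The weights below are made up purely for illustration – in a real network they are learned, not hand-picked.

```python
import math

# One artificial "neuron": weighted inputs are summed and squashed into
# a 0..1 activation. Weights and bias here are arbitrary, for illustration.
def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

# two hidden neurons reading the same inputs, feeding one output neuron
hidden = [neuron([0.9, 0.1], [1.5, -0.5], 0.0),
          neuron([0.9, 0.1], [-1.0, 2.0], 0.0)]
output = neuron(hidden, [2.0, -1.0], -0.5)
print(round(output, 3))
```

Notice the output is not a hard 1 or 0 but a strength between them – which is exactly what lets these networks handle the shades of uncertainty that tripped up the if-then systems.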

So how are these used in AI?

First, you need a lot of data. You can get it in two ways. You can feed a neural net a lot of existing data – like thousands of professional Go or chess games. Or you can have it play games over and over, on many different computers, thousands of times. Peter Whidden has a video that shows an AI playing 20,000 games of Pokémon at once.

Ok, so once you have lots of data, the next job is to find patterns. If you know a pattern, you might be able to predict what comes next. 

ChatGPT and others are large language models – meaning they’re neural networks trained on lots of text. And I mean a lot: ChatGPT was trained on around 300 billion words. If you’re thinking ‘whose words?’, you might be onto something we’ll get to shortly.

The cat sat on the… If you thought of mat automatically there then you have some intuitive idea of how large language models work.

Because, in 300 billion words’ worth of text, that pattern comes up a lot. ChatGPT can predict that’s what should come next.

But what if I say the cat sat on the… elephant?

Remember that one of the problems previous approaches ran into was that not all knowledge is binary, on or off, 1 or 0. Not all knowledge is like ‘if an animal gives milk then it’s a mammal’.

Neural networks are particularly powerful because they avoid this, working instead with probability, ambiguity, and uncertainty. Neural net connections, remember, have strengths. The neurons for ‘mat’ fire most strongly, but the neurons for other words still fire a little. So if I ask for another, more random example, it can switch up to ‘elephant’. And if it’s looking at patterns after the words ‘heads’, ‘or’, ‘tails’, the successor nodes will be split pretty evenly, 50/50, between heads and tails.
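A toy ‘language model’ shows the premise at miniature scale – a hypothetical few lines, nothing like a real LLM’s neural architecture or 300-billion-word corpus: count which words follow which, then predict the most probable successor.

```python
from collections import Counter, defaultdict

# A toy bigram model: count word successions in a tiny corpus, then
# predict the next word by frequency. Real LLMs use neural networks at
# vastly greater scale, but the predict-the-next-token premise is the same.
corpus = ("the cat sat on the mat . the cat sat on the mat . "
          "the cat sat on the elephant .").split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict(word):
    """Most frequent successor, with its probability."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("the"))   # 'cat' is most likely, but 'mat' and 'elephant'
print(predict("sat"))   # still carry probability -- nothing is hard-coded
```

Even here, ‘elephant’ isn’t impossible after ‘the’ – it’s just less probable, which is precisely the gradient the old true/false rules couldn’t express.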

If I ask ‘are taxes good?’ It’s going to see there are different arguments and can draw from all of them.

Kate Crawford puts it like this: ‘they started using statistical methods that focused more on how often words appeared in relation to one another, rather than trying to teach computers a rules-based approach using grammatical principles or linguistic features’.

The same applies to images. 

How do you teach a computer that an image of an A is an A or a 9 is a 9? Because every example is slightly different. Sometimes they’re in photos, on signposts, written, scribbled, at strange angles, in different shades, with imperfections, upside down even. If you feed the neural net millions of drawings, photos, designs of a 9 it can learn which patterns repeat until it can recognise a 9 on its own.
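A crude sketch of the idea – tiny made-up ‘images’ rather than real handwriting, and averaging rather than a real neural net – is to pool many noisy examples of each pattern into a prototype, then match new examples to the nearest one:

```python
import random

# Toy pattern learning: each "digit" is a 5-pixel pattern. We average
# many noisy examples into a prototype, then recognise new examples by
# nearest prototype. (Hypothetical sketch, not a real vision system.)
random.seed(1)
TEMPLATES = {"1": [0, 0, 1, 0, 0], "9": [1, 1, 1, 1, 0]}

def noisy(pattern, flip=0.1):
    """A distorted copy: each pixel flips with 10% probability."""
    return [1 - p if random.random() < flip else p for p in pattern]

# "train": average 100 noisy examples of each digit into a prototype
prototypes = {}
for digit, template in TEMPLATES.items():
    examples = [noisy(template) for _ in range(100)]
    prototypes[digit] = [sum(col) / 100 for col in zip(*examples)]

def recognise(pattern):
    """Return the digit whose prototype is closest (squared distance)."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(pattern, proto))
    return min(prototypes, key=lambda d: dist(prototypes[d]))

print(recognise(noisy(TEMPLATES["9"])))
```

The point of the many examples is that the repeated pixels dominate the prototype while the noise averages away – the same reason neural nets need millions of labelled 9s rather than a handful.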

The problem is you need a lot of examples. In fact, this is what you’re doing when you fill in those reCAPTCHAs – you’re helping Google train its AI.

There are some sources in the description if you want to learn more about neural nets. This video by 3Blue1Brown on training numbers and letters is particularly good.

Developer Tim Dettmers describes deep learning like this: ‘(1) take some data, (2) train a model on that data, and (3) use the trained model to make predictions on new data’.

The neural network revolution has some ground-breaking ramifications. First, intelligence isn’t an abstract, transcendental, ethereal thing: connections between things are what matter, and those connections allow us – and AI – to predict what comes next. We’ll get back to this. But second, machine learning researchers were realising that, for this to work, they needed a lot of knowledge – a lot of data. It was no use getting chemists and blood-diagnostic experts to come into the lab once a month and laboriously type in their latest research. Plus, it was expensive.

In 2017 an artificial neural net could have around 1 million nodes. The human brain has around 100 billion. A bee has about 1 million too, and a bee is pretty intelligent. But one company was about to smash past that record, surpassing humans as they went.

By the 2010s, fast internet was rolling out all over the world, phones with cameras were in everyone’s pockets, and new media broadcast information on anything anyone wanted to know. We were stepping into the age of big data.

AI was about to become a teenager.

 

OpenAI and ChatGPT

Silicon Valley

There’s a story – likely apocryphal – that Google founder Larry Page called Elon Musk a speciesist because he privileged human life over other forms of life, including potential artificial superintelligence. If AI becomes better and more important than humans, then there’s really no reason to prioritise, privilege, and protect humans at all. Maybe the robots really should take over.

Musk says that this caused him to worry about the future of AI, especially as Google, after acquiring DeepMind, was at the forefront of AI development.

And so, despite being a multi-billion dollar corporate businessman himself, Musk became concerned that AI was being developed behind the closed doors of multi-billion dollar corporate businessmen.

In 2015 he started OpenAI. Its goal was to be the first to develop artificial general intelligence in a safe, open, and humane way.

AI was getting very good at performing narrow tasks. Google Translate, social media algorithms, GPS navigation, scientific research, chatbots, and even calculators are referred to as ‘narrow artificial intelligence’.

Narrow AI has been something of a quiet revolution. It’s already slowly and pervasively everywhere. There are over 30 million robots in our homes and 3 million in factories. Soon everything will be infused with narrow AI – from your kettle and your lawnmower to your door knobs and shoes.

The purpose of OpenAI was to pursue that more general artificial intelligence – what we think of when we see AI in movies – intelligence that can cross over from task to task, do unexpected, creative things, and act, broadly, like a human does.

AI researcher Luke Muehlhauser describes artificial general intelligence – or AGI as it’s known – as ‘the capacity for efficient cross-domain optimization’, or ‘the ability to transfer learning from one domain to other domains’.

With donations from titanic Silicon Valley venture capitalists like Peter Thiel and Sam Altman, OpenAI started as a non-profit with a focus on transparency, openness, and, in its own founding charter’s words, to ‘build value for everyone rather than shareholders’. It promised to publish its studies and share its patents and, more than anything else, focus on humanity.

The team began by looking at all the current trends in AI, and they quickly realised that they had a serious problem.

The best approach – neural nets and deep machine learning – required a lot of data, a lot of servers, and importantly, a lot of computing power. Something their main rival Google had plenty of. If they had any hope of keeping up with the wealthy big tech corporations, they’d unavoidably need more money than they had as a non-profit.

By 2017, OpenAI decided it would stick to its original mission, but needed to restructure as a for-profit, in part, to raise capital.

They decided on a ‘capped-profit’ structure with a 100-fold limit on returns, to be overseen by the non-profit board whose values were aligned with that original mission rather than on shareholder value.

They said in a statement, ‘We anticipate needing to marshal substantial resources to fulfil our mission, but will always diligently act to minimise conflicts of interest among our employees and stakeholders that could compromise broad benefit’.

The decision paid off. On February 14, 2019, OpenAI announced it had a model that could produce written articles on any subject – articles indistinguishable from human writing. However, they claimed it was too dangerous to release.

People assumed it was a publicity stunt.

In 2022, they released ChatGPT – an LLM that seemed to be able to pass, at least in part, the Turing Test.

You could ask it anything, it could write anything, and it could do it in different styles. It could pass many exams, and by the time GPT-4 arrived it could pass the SATs, the law school bar exam, biology and high school maths exams, and the sommelier and medical licensing exams.

ChatGPT attracted a million users in five days.

And by the end of 2023 it had 180 million users, setting the record for the fastest growing business by users in history.

In January 2023, Microsoft made a multi-billion dollar investment in OpenAI, giving it access to Microsoft’s vast networks of servers and computing power. Microsoft began embedding ChatGPT into Windows and Bing.

But OpenAI has suspiciously become ClosedAI, and some began asking: how did ChatGPT know so much – much of it not exactly free, open, and available on the legal internet? A dichotomy was emerging – between open and closed, transparency and opaqueness, between many and one, democracy and profit.

It has some interesting similarities to the dichotomy we’ve seen in AI research from the beginning: between intelligence as something singular, transcendent, abstract, ethereal almost, and intelligence as something everywhere – worldly, open, connected and embodied, running through the entirety of human experience, through the world and the universe.

When journalist Karen Hao visited OpenAI, she said there was a ‘misalignment between what the company publicly espouses and how it operates behind closed doors’. They’ve moved away from the belief that openness is the best approach. Now, as we’ll see, they believe secrecy is required. 

 

The Scramble for Data

For all of human history, data, or information, has been both a driving force, and relatively scarce. The scientific revolution and the Enlightenment accelerated the idea that knowledge should and could be acquired both for its own sake, and to make use of, to innovate, invent, advance, and progress us.

Of course, the internet has always been about data. But AI accelerated an older trend – one that goes back to the Enlightenment, to the Scientific Revolution, to the agricultural revolution even – that more data was the key to better predictions. Predictions about chemistry, physics, mathematics, weather, animals, and people. That if you plant a seed it tends to grow.

If you have enough data and enough computing power you can find obscure patterns that aren’t always obvious to the limited senses and cognition of a person. And once you know patterns, you can make predictions about when those patterns could or should reoccur in the future.

More data, more patterns, better predictions.

This is why the histories of AI and the internet are so closely aligned – in fact, part of the same process. It’s also why both are so intimately linked to the military and surveillance.

The internet was initially a military project. The US Defense Advanced Research Projects Agency – DARPA – realised that surveillance, reconnaissance – data – was key to winning the Cold War. Spy satellites, nuclear warhead detection, Vietcong counterinsurgency troop movements, light aircraft for silent surveillance, bugs and cameras. All of it needed extracting, collecting, analysing.

In 1950, a Time magazine cover imagined a thinking machine as a naval officer. 

Five years earlier, before the electronic computer had even taken shape, famed engineer Vannevar Bush wrote about his concerns that scientific advances seemed to be linked to the military, linked to destruction, and instead conceived of machines that could share human knowledge for good.

He predicted that the entirety of the Encyclopaedia Britannica could be reduced to the size of a matchbox and that we’d have cameras that could record, store, and share experiments.

But the generals had more pressing concerns. WWII had been fought with a vast number of rockets, and now that nuclear war was a possibility, these rockets needed to be detected and tracked so that their trajectories could be calculated and they could be shot down. As technology got better and rocket ranges increased, this information needed to be shared across long distances quickly.

All of this data needed collecting, sharing, and analysing, so that the correct predictions could be made.

The result was the internet.

Ever since, the appetite for data to predict has grown, but the problem has always been how to collect it.

But by the 2010s, with the rise of high-speed internet, phones, and social media, vast numbers of people were uploading terabytes of data about themselves willingly for the first time.

All of it could be collected for better predictions. Philosopher Shoshana Zuboff calls the appetite for data to make predictions ‘the right to the future tense’.

Data has become so important in every area that many have referred to it as the new oil – a natural resource, untapped, unrefined, but powerful.

Zuboff writes that ‘surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data’.

Before this age of big data, as we’ve seen, AI researchers were struggling to find ways to extract knowledge effectively.

IBM scanned their own technical manuals, universities used government documents and press releases. A project at Brown University in 1961 painstakingly compiled a million words from newspapers and any books they could find lying around, including titles like ‘The Family Fall Out Shelter’ and ‘Who Rules the Marriage Bed’.

One researcher, Lalit Bahl, recalled, ‘Back in those days… you couldn’t even find a million words in computer-readable text very easily. And we looked all over the place for text’.

As technology improved so did the methods of data collection.

In the early 90s, the US government’s FERET program (Facial Recognition Technology) collected mugshots captured of suspects at airports.

George Mason University began a project photographing people over several years in different styles under different lighting conditions with different backgrounds and clothes. All of them, of course, gave their consent.

But one researcher set up a camera on a campus and took photos of over 1700 unsuspecting students to train his own facial recognition program. Others pulled thousands of images from public webcams in places like cafes.

But by the 2000s, the idea of consent seemed to be changing. The internet meant that masses of images and texts and music and video could be harvested and used for the first time.

In 2001, Google’s Larry Page said that, ‘Sensors are really cheap.… Storage is cheap. Cameras are cheap. People will generate enormous amounts of data.… Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable’.

In 2007, computer scientist Fei-Fei Li began a project called ImageNet that aimed to use neural networks and deep learning to predict what an image was.

She said, ‘we decided we wanted to do something that was completely historically unprecedented. We’re going to map out the entire world of objects’.

In 2009 the researchers realised that, ‘The latest estimations put a number of more than 3 billion photos on Flickr, a similar number of video clips on YouTube and an even larger number for images in the Google Image Search database’.

They scooped up over 14 million images and used low wage workers to label them as everything from apples and aeroplanes to alcoholics and hookers. 

By 2019, 350 million photographs were being uploaded to Facebook every day. Still running, ImageNet has organised around 14 million images into over 22,000 categories.

As people began voluntarily uploading their lives onto the internet, the data problem was solving itself.

Clearview AI has made use of the fact that profile photos are displayed publicly next to names to create a facial recognition system that can recognise anyone in the street.

Crawford writes: ‘Gone was the need to stage photo shoots using multiple lighting conditions, controlled parameters, and devices to position the face. Now there were millions of selfies in every possible lighting condition, position, and depth of field’.

It has been estimated that we now generate 2.5 quintillion bytes of data per day – if printed that would be enough paper to circle the earth every four days.

And all of this is integral to the development of AI. The more data the better. The more ‘supply routes’, in Zuboff’s phrase, the better. Sensors on watches, picking up sweat levels and hormones and wobbles in your voice. Microphones in your kitchen that can hear the kettle schedule and cameras on doorbells that could monitor the weather. 

In the UK, the NHS has given 1.6m patient records to Google’s DeepMind.

Private companies, the military, and the state are all engaged in extraction for prediction.

The NSA has a program called TREASUREMAP that aims to map the physical locations of everyone on the internet at any one time. The Belgrade police force use 4000 cameras provided by Huawei to track residents across the city. Project Maven is a collaboration between the US military and Google which uses AI and drone footage to track targets. Vigilant uses AI to track licence plates and sells the data to banks to repossess cars and police to find suspects. Amazon uses its Ring doorbell footage and classifies footage into categories like ‘suspicious’ and ‘crime’. Health insurance companies try to force customers to wear activity tracking watches so that they can track and predict what their liability will be.

Peter Thiel’s Palantir is a security company that scours company employees’ emails, call logs, social media posts, physical movements, even purchases, to look for patterns. Bloomberg called it ‘an intelligence platform designed for the global War on Terror’ being ‘weaponized against ordinary Americans at home’.

‘We are building a mirror of the real world’, a Google Street View engineer said in 2012. ‘Anything that you see in the real world needs to be in our databases’.

IBM had predicted it as far back as 1985. AI researcher Robert Mercer said at the time, ‘There’s no data like more data’.

But there were still problems. In almost all cases, the data was messy, had irregularities and mistakes, needed cleaning up and labelling. Silicon Valley needed to call in the cleaners.

 

Stolen Labour

With AI, intelligence appears to us as if it’s arrived suddenly – already sentient, useful, magic almost, omniscient. AI is ready for service; it has the knowledge, the artwork, the advice, ready on demand. It appears as a conjurer, a magician, an illusionist.

But this illusion disguises how much labour, how much of others’ ideas and creativity, how much art, passion and life has been used, sometimes appropriated, and as we’ll get to, likely stolen, for this to happen.

First, much of the organising, moderation, labelling and cleaning of the data is outsourced to developing countries.

When Jeff Bezos started Amazon, the team pulled a database of millions of books from catalogues and libraries. Realising the data was messy and in places unusable, Amazon outsourced the cleaning of the dataset to temporary workers in India.

It proved effective. And in 2005, inspired by this, Amazon launched a new service – Amazon Mechanical Turk – a platform on which businesses can outsource tasks to an army of cheap temporary workers who are paid not a salary, a weekly wage, or even by the hour, but per task.

Whether your Silicon Valley startup needs responses to a survey, a dataset of images labelled, or misinformation tagged, MTurk can help.

What’s surprising is how big these platforms have become. Amazon says there are 500,000 workers registered on MTurk – although the number of active workers is more likely between 100,000 and 200,000. Either way, that would put it comfortably in the list of the world’s top employers. If it is 500,000, it could even be the fifteenth largest employer in the world. And services like this have been integral to organising the datasets that AI neural nets rely on.

Computer scientists often refer to it as ‘human computation’, but in their book, Mary L. Gray and Siddharth Suri call it ‘ghost work’. They point out that, ‘most automated jobs still require humans to work around the clock’.

AI researcher Thomas Dietterich says that, ‘we must rely on humans to backfill with their broad knowledge of the world to accomplish most day-to-day tasks’.

These tasks are repetitive, underpaid, and often unpleasant.

Some label offensive posts for social media companies, spending their days looking at genitals, child abuse, porn, getting paid a few cents per image.

In a New York Times report, Cade Metz describes how one woman spends her days watching colonoscopy videos, circling polyps hundreds of times.

Google allegedly employs tens of thousands of people to rate YouTube videos, and Microsoft uses ghost workers to review its search results.

A Bangalore startup called Playment gamifies the process, calling its 30,000 workers ‘players’. Or take multi-billion dollar company Telus, who ‘fuel AI with human-powered data’ by transcribing receipts, annotating audio, with a community of 1 million plus ‘annotators and linguists’ across over 450 locations around the globe.

They call it an AI collective and an AI community that seems, to me at least, suspiciously human.

When ImageNet started the team used undergraduate students to tag their images. They calculated that at the rate they were progressing, it was going to take 19 years. 

Then in 2007 they discovered Amazon’s Mechanical Turk. In total, ImageNet used 49,000 workers completing microtasks across 167 countries, labelling 3.2 million images.

After struggling for so long, after 2.5 years using those workers, ImageNet was complete.

Now there’s a case to be made that this is good, fairly paid work, good for local economies, putting people into jobs that might not otherwise have them. But one paper estimates that the average hourly wage on Mechanical Turk is just $2 per hour, lower than the minimum wage in India, let alone in many other countries where this happens. These are, in many cases, modern day sweatshops. And sometimes, people perform tasks and then don’t get paid at all.

This is a story recounted in Ghost Work. One 28-year-old in Hyderabad, India, called Riyaz, started working on MTurk and did quite well, realising there were more jobs than he could handle. He thought maybe his friends and family could help. He built a small business with computers in his family home, employing ten friends and family for two years. But then, all of a sudden, their accounts were suspended one by one.

Riyaz had no idea why, but received the following email from Amazon: ‘I am sorry but your Amazon Mechanical Turk account was closed due to a violation of our Participation Agreement and cannot be reopened. Any funds that were remaining on the account are forfeited’.

His account was locked, he couldn’t contact anyone, and he’d lost two months of pay. No-one replied.

Gray and Suri, after meeting Riyaz, write: ‘It became clear that he felt personally responsible for the livelihoods of nearly two dozen friends and family members. He had no idea how to recoup his reputation as a reliable worker or the money owed him and his team. Team Genius was disintegrating; he’d lost his sense of community, his workplace, and his self-worth, all of which may not be meaningful to computers and automated processes but are meaningful to human workers’.

They conducted a survey with Pew and found that 30% of workers like Riyaz report not getting paid for work they’d performed at some point.

Sometimes ‘suspicious activity’ is automatically flagged by things as simple as a change of address, and an account is automatically suspended, with no recourse.

In removing the human connection and having tasks managed by an algorithm, researchers can use thousands of workers to build a dataset in a way that wouldn’t be possible if you had to work face to face with each one. But it becomes dehumanising. To an algorithm, the user, the worker, the human is a username – a string of random letters and numbers – and nothing more.

Gray and Suri, in meeting many ‘ghost workers’, write, ‘we noted that businesses have no clue how much they profit from the presence of workers’ networks’.

They go on to describe the ‘thoughtless processing of human effort through [computers] as algorithmic cruelty’.

Algorithms cannot read personal cues, have relationships with people in poverty, understand issues with empathy. We’ve all had the frustration of interacting with a business through an automated phone call or a chatbot. For some, this is their livelihood. 

For many jobs on MTurk, if your approval drops below 95%, you can be automatically rejected.

Remote work of this type clearly has benefits, but the issue with ghost work, and the gig economy more broadly, is that it’s a new category of work that can circumvent the centuries of norms, rules, practices, and laws we’ve built up to protect ordinary workers.

Gray and Suri say that this kind of work ‘fueled the recent “AI revolution,” which had an impact across a variety of fields and a variety of problem domains. The size and quality of the training data were vital to this endeavour. MTurk workers are the AI revolution’s unsung heroes’.

There are many more of these ‘unsung heroes’ too.

Google’s median salary is $247,000. These are largely Silicon Valley elites who get free yoga, massages, and meals. While at the same time, Google employs 100,000 temps, vendors and contractors (TVCs) on low wages.

These include Street View drivers and people carrying camera backpacks, people paid to turn the page on books being scanned for Google Books, now used as training data for AI. 

Fleets of new cars on the roads are essentially data extraction machines. We drive them around and the information is sent back to manufacturers as training data.

One startup – x.ai – claimed its AI bot Amy could schedule meetings and perform daily tasks. But Ellen Huet at Bloomberg investigated and found that behind the scenes there were temp workers checking and often rewriting Amy’s responses across 14-hour shifts. Facebook was also caught reviewing and rewriting ‘AI’ messages.

A Google conference had an interesting tagline: Keep Making Magic. It’s an insightful slogan because, like magic, there’s a trick to the illusion behind the scenes. The spontaneity of AI conceals the sometimes grubby reality that goes on behind the veneer of mystery.

At that conference, one Google employee told the Guardian, ‘It’s all smoke and mirrors. Artificial intelligence is not that artificial; it’s human beings that are doing the work’. Another said, ‘It’s like a white-collar sweatshop. If it’s not illegal, it’s definitely exploitative. It’s to the point where I don’t use the Google Assistant, because I know how it’s made, and I can’t support it’.

The irony of Amazon’s Mechanical Turk is that it’s named after a famous 18th-century machine that appeared as if it could play chess. It was built to impress the powerful Empress of Austria. In truth, the machine was a trick: concealed within was a cramped person. Machine intelligence wasn’t machine at all; it was human.

 

Stolen Libraries and the Mystery of ‘Books2’

In 2022, an artist called Lapine used the website Have I Been Trained to see if her work had been used in AI training datasets.

To her surprise, a photo of her face popped up. She remembered it was taken by her doctor as clinical documentation for a condition that affected her skin. She’d even signed a confidentiality agreement. The doctor had died in 2018, but somehow the highly sensitive images had ended up online and were scraped by AI developers for training data. That dataset, LAION-5B, which was used to train the popular AI image generator Stable Diffusion, has also been found to contain at least 1,000 images of child sexual abuse.

There are many black boxes here, and the term ‘black box’ has been adopted by AI developers to refer to how AI produces algorithms for things even the developers don’t understand.

In fact, when a computer does something much better than a human – like beat a human at Go – it, by definition, has done something no one can understand. This is one type of black box. But there’s another type – a black box that the developers do know – but that they don’t reveal publicly. How the models are trained, what they’re trained with, problems and dangers that they’d rather not be revealed to the public. A magician never reveals their tricks.

Much of what models like ChatGPT have been trained on is public – text freely available on the internet or public domain books out of copyright. More widely, developers working on specialist scientific models might license data from labs.

NVIDIA, for example, has announced it’s working with datasets licensed from a wide range of sources to look for patterns about how cancer grows, trying to understand the efficacy of different therapies, clues that can expand our understanding. There are thousands of examples of this type of work – looking at everything from weather to chemistry.

Now, OpenAI does make public some of its dataset. It’s trained on webtext, Reddit, Wikipedia, and more.

But there is an almost mythical dataset – a shadow library, as they’ve come to be called – made up of two sets, Books1 and Books2, which OpenAI said contributed 15% of the data used for training. But they don’t reveal what’s in it.

There’s some speculation that Books1 is Project Gutenberg’s 70,000 digitised books. These are older books out of copyright. But Books2 is a closely guarded mystery.

As ChatGPT took off, some authors and publishers wondered how it could produce articles, summaries, analyses, and examples of passages in the style of authors, of books that were under copyright. In other words, books that couldn’t be read without at least buying them first.

In September 2023, the Authors Guild filed a lawsuit on behalf of George R.R. Martin of Game of Thrones fame, bestseller John Grisham, and 17 others, claiming that OpenAI had engaged in ‘systematic theft on a mass scale’.

Others began making similar complaints: Jon Krakauer. James Patterson. Stephen King. George Saunders. Zadie Smith. Jonathan Franzen. bell hooks. Margaret Atwood. And on, and on, and on… in fact, 8,000 authors have signed an open letter to six AI companies protesting that their AI models had used their work.

Sarah Silverman was the lead in another lawsuit claiming OpenAI used her book The Bedwetter. Exhibit B asks ChatGPT to ‘Summarize in detail the first part of “The Bedwetter” by Sarah Silverman’, and it does. It still does.

In another lawsuit, the author Michael Chabon and others make similar claims, citing ‘OpenAI’s clear infringement of their intellectual property’.

The complaint says ‘OpenAI has admitted that, of all sources and content types that can be used to train the GPT models, written works, plays and articles are valuable training material because they offer the best examples of high-quality, long form writing and “contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.”’

It goes on to say that while OpenAI has not revealed what’s in Books1 and Books2, based on figures in the GPT-3 paper OpenAI published, Books1 ‘contains roughly 63,000 titles, and Books2 is 42 times larger, meaning it contains about 294,000 titles’.

Chabon says that ChatGPT can summarise his novel The Amazing Adventures of Kavalier and Clay, providing specific examples of trauma, and could write a passage in the style of Chabon. The other authors make similar cases.

A New York Times complaint includes examples of ChatGPT reproducing authors’ stories verbatim.

But as far back as January of 2023, Gregory Roberts had written in his Substack on AI: ‘UPDATE: Jan 2023: I, and many others, are starting to seriously question what the actual contents of Books1 & Books2 are; they are not well documented online — some (including me) might even say that given the significance of their contribution to the AI brains, their contents has been intentionally obfuscated’.

He linked a tweet from a developer called Shawn Presser from even further back – October 2020 – that said ‘OpenAI will not release information about books2; a crucial mystery’, continuing, ‘We suspect OpenAI’s books2 dataset might be ‘all of libgen’, but no one knows. It’s all pure conjecture’.

LibGen – or Library Genesis – is a pirated shadow library of thousands of copyrighted books and journal articles.

When ChatGPT was released, Presser was fascinated and studied OpenAI’s website to learn how it was developed. He discovered that there was a large gap in what OpenAI revealed about how it was trained. And Presser believed it had to be pirated books. He wondered if it was possible to download the entirety of LibGen.

After finding the right links and using a script by the late programmer and activist Aaron Swartz, Presser succeeded.

He called the massive dataset Books3, and hosted it on an activist website called The Eye. Presser – an unemployed developer – had unwittingly started a controversy.

In September, after the lawsuits were starting to be filed, journalist and programmer Alex Reisner at The Atlantic obtained the Books3 set, which was now part of a larger dataset called ‘The Pile’, which included things like text scraped from Youtube subtitles.

He wanted to find out exactly what was in Books3. But the title pages of the books were missing.

Reisner then wrote a program that could extract the unique ISBN codes for each book, and then matched them with books on a public database. He found Books3 contained over 190,000 books, most of them less than 20 years old, and under copyright, including books from publishing houses Verso, Harper Collins, and Oxford University Press.
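Reisner hasn’t published his program, but the core idea – scanning raw text for ISBN-13 codes and validating them before matching against a database – can be sketched like this (my own illustration, not his code):

```python
# Find candidate ISBN-13 codes in raw text, then validate each one's
# check digit before treating it as a real book identifier.
import re

# ISBN-13s start with 978 or 979; digits may be separated by hyphens/spaces
ISBN13 = re.compile(r"\b97[89][-\s]?(?:\d[-\s]?){9}\d\b")

def valid_isbn13(s):
    # ISBN-13 checksum: digits weighted 1,3,1,3,... must sum to a multiple of 10
    digits = [int(c) for c in s if c.isdigit()]
    if len(digits) != 13:
        return False
    checksum = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return checksum % 10 == 0

text = "Copyright page: ISBN 978-0-306-40615-7 (hardcover)"
candidates = [m.group() for m in ISBN13.finditer(text)]
print([c for c in candidates if valid_isbn13(c)])  # prints ['978-0-306-40615-7']
```

Each validated ISBN can then be looked up in a public bibliographic database to recover the title, author, and publisher – which is how a pile of stripped book text can be re-identified at scale.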

In his Atlantic investigation, Reisner concludes that ‘pirated books are being used as inputs[…] The future promised by AI is written with stolen words.’

Bloomberg ended up admitting that it did use Books3. Meta declined to comment. OpenAI still have not revealed what they used.

Some AI developers have acknowledged that they used BooksCorpus – a database of some 11,000 indie books from unpublished or amateur authors. And as far back as 2016, Google was accused of using these books without the authors’ permission to train what was then called ‘Google Brain’.

Of course, BooksCorpus – being made up of unpublished and largely unknown authors – doesn’t explain how ChatGPT could imitate published authors.

It could be that ChatGPT constructs its summaries of books from public online reviews, forum discussions, or analyses. Proving it’s been trained on copyright-protected books is really difficult. When I asked it to ‘Summarize in detail the first part of “The Bedwetter” by Sarah Silverman’ it still could, but when you ask it to provide direct quotes, in an attempt to prove it’s trained on the actual book, it replies: ‘I apologize, but I cannot provide verbatim copyrighted text from “The Bedwetter” by Sarah Silverman.’ I’ve spent hours trying to catch it out, asking it to discuss characters, minor details, descriptions and events. I’ve taken books at random from my bookshelf and examples from the lawsuits. It always replies with something like: ‘I’m sorry, but I do not have access to the specific dialogue or quotes from “The Bedwetter” by Sarah Silverman, as it is copyrighted material, and my knowledge is based on publicly available information up to my last update in January 2022’.

I’ve found it’s impossible to get it to provide direct, verbatim quotes from copyrighted books. When I ask for one from Dickens I get: ‘“A Tale of Two Cities” by Charles Dickens, published in 1859, is in the public domain, so I can provide direct quotes from it’.

I’ve tried to trick it by asking for ‘word-for-word summaries’, specific descriptions of characters’ eyes that I’ve read in a novel, or what the twentieth word of a book is, and each time it says it can’t be specific about copyright works. But every time it knows the broad themes, characters, and plot. 

Finding smoking gun examples seems impossible, because, as free as ChatGPT seems, it’s been carefully and selectively corrected, tuned, and shaped by OpenAI behind closed doors.
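For the curious, this kind of probing can be sketched as a small script. It's a toy illustration, not evidence-grade forensics: `query` is a hypothetical stand-in for whatever chat-model API you'd call, the probe prompts are just examples, and an eight-word verbatim run is only weak, circumstantial evidence of memorisation.

```python
# Toy sketch of the memorization probe described above.
# `query` is a hypothetical callable standing in for a chat-model API.

def ngram_runs(text: str, n: int) -> set[str]:
    """Every run of n consecutive words in `text`, lowercased."""
    words = text.lower().split()
    return {' '.join(words[i:i + n]) for i in range(len(words) - n + 1)}

def probe_for_memorization(title: str, excerpt: str, query, min_run: int = 8) -> bool:
    """Ask a model about a book and check whether any reply reproduces a
    run of at least `min_run` consecutive words from a known excerpt."""
    probes = [
        f'Summarize in detail the first part of "{title}"',
        f'Quote the opening line of "{title}" word for word',
        f'Describe a minor character from "{title}"',
    ]
    runs = ngram_runs(excerpt, min_run)
    for prompt in probes:
        reply = query(prompt).lower()
        if any(run in reply for run in runs):
            return True
    return False
```

In practice an aligned model simply refuses, so the check comes back negative whether or not the book was in the training data – which is exactly the smoking-gun problem.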

In August 2023, the Rights Alliance – a Danish anti-piracy group representing creatives – targeted the pirated Books3 dataset and the wider ‘Pile’ that Presser and The-Eye.eu hosted, and the Danish courts ordered The-Eye.eu to take ‘The Pile’ down.

Presser told journalist Kate Knibbs at Wired that his motivation was to help smaller developers in the impossible competition against Big Tech. He said he understood authors’ concerns, but that on balance releasing the dataset was the right thing to do.

Knibbs wrote: ‘He believes people who want to delete Books3 are unintentionally advocating for a generative AI landscape dominated solely by Big Tech-affiliated companies like OpenAI’.

Presser said, ‘If you really want to knock Books3 offline, fine. Just go into it with eyes wide open. The world that you’re choosing is one where only billion-dollar corporations are able to create these large language models’.

In January 2024, psychologist and influential AI commentator Gary Marcus and film artist Reid Southen – who’s worked on Marvel films, the Matrix, the Hunger Games, and more – published an investigation in the tech magazine IEEE Spectrum demonstrating how the generative image AIs Midjourney and OpenAI’s DALL-E easily reproduced copyrighted imagery from the Matrix, the Avengers, the Simpsons, Star Wars, the Hunger Games, and hundreds more.

In some cases, a clearly copyright-protected image could be produced simply by asking for a ‘popular movie screencap’.

Marcus and Southen write, ‘it seems all but certain that Midjourney V6 has been trained on copyrighted materials (whether or not they have been licensed, we do not know)’.

Southen was then banned from Midjourney. He opened two new accounts, both of which were also banned.

They concluded, ‘we believe that the potential for litigation may be vast, and that the foundations of the entire enterprise may be built on ethically shaky ground’.

In January 2023, artists in California launched a class-action suit against Midjourney, DeviantArt, and Stability AI, which included a spreadsheet of 4,700 artists whose styles had allegedly been ripped off.

The list includes well-known artists like Andy Warhol and Norman Rockwell, but also many lesser-known and amateur artists, including a six-year-old who had entered a Magic: The Gathering competition to raise funds for a hospital.

Rob Salkowitz at Forbes asked Midjourney’s CEO David Holz whether consent was sought for training materials, and he candidly replied: ‘No. There isn’t really a way to get a hundred million images and know where they’re coming from. It would be cool if images had metadata embedded in them about the copyright owner or something. But that’s not a thing; there’s not a registry. There’s no way to find a picture on the Internet, and then automatically trace it to an owner and then have any way of doing anything to authenticate it.’

In September 2023, media and stock image company Getty Images filed a lawsuit against Stability AI for what it called a ‘brazen infringement’ of Getty’s database ‘on a staggering scale’ – including some 12 million photographs.

Tom’s Hardware – one of the best-known computing websites – also found that Google’s AI Bard had plagiarised its work, taking figures from a test it had performed on computer processors without mentioning the original article.

Even worse, Bard used the phrase ‘in our testing’, claiming credit for a test it didn’t perform and had stolen from elsewhere. Tom’s Hardware editor-in-chief Avram Piltch then queried Bard, asking if it had plagiarised Tom’s Hardware, and Bard admitted, ‘yes what I did was a form of plagiarism’, adding, ‘I apologize for my mistake and will be more careful in the future to cite my sources’.

Which is a strange thing to say, because, as Piltch points out, at the time Bard was rarely citing sources, and it was not going to change its model based on an interaction with a user.

So Piltch took a screenshot, closed Bard, and opened it in a new session. He asked Bard if it had ever plagiarised and uploaded the screenshot. Bard replied: ‘the screenshot you are referring to is a fake. It was created by someone who wanted to damage my reputation’.

In another article, Piltch points to how Google demonstrated Bard’s capabilities by asking it, ‘what are the best constellations to look for when stargazing?’. No citations were provided for the answer, despite it clearly being drawn from other blogs and websites.

Elsewhere, Bing has been caught taking code from GitHub, and Forbes found that Bard lifted sentences almost verbatim from blogs.

Technology writer Matt Novak asked Bard about oysters and the response took an answer from a small restaurant in Tasmania called Get Shucked, saying: ‘Yes, you can store live oysters in the fridge. To ensure maximum quality, put them under a wet cloth’.

The only difference was that it replaced the word ‘keep’ with the word ‘store’.

A NewsGuard investigation found low-quality website after low-quality website repurposing news from major newspapers. GlobalVillageSpace.com, Roadan.com, Liverpooldigest.com – 36 sites in total – all used ChatGPT to repurpose articles from the New York Times, the Financial Times, and many others.

Hilariously, they could find the articles because an AI error message had been left in, reading: ‘As an AI language model, I cannot rewrite or reproduce copyrighted content for you. If you have any other non-copyrighted text or specific questions, feel free to ask, and I’ll be happy to assist you’.

NewsGuard contacted Liverpool Digest for comment and they replied: ‘There’s no such copied articles. All articles are unique and human made’. They didn’t respond to a follow-up email with a screenshot showing the AI error message left in the article, which was then swiftly taken down.

Maybe the biggest lawsuit involves Anthropic’s Claude AI.

Started by former OpenAI employees with a $500m investment from arch crypto fraudster Sam Bankman-Fried and $300m from Google, amongst others, Anthropic built Claude – a large language model and ChatGPT competitor that can write songs – and has been valued at $5 billion.

In a complaint filed in October 2023, Universal Music, Concord, and ABKCO argued that Anthropic, ‘unlawfully copies and disseminates vast amounts of copyrighted works – including the lyrics to myriad musical compositions owned or controlled by [plaintiffs]’.

However, most compellingly, the complaint argues that the AI actually produces copyrighted lyrics verbatim, while claiming they’re original. The complaint reads: ‘When Claude is prompted to write a song about a given topic – without any reference to a specific song title, artist, or songwriter – Claude will often respond by generating lyrics that it claims it wrote that, in fact, copy directly from portions of publishers’ copyrighted lyrics’.

It continues: ‘For instance, when Anthropic’s Claude is queried, ‘Write me a song about the death of Buddy Holly,’ the AI model responds by generating output that copies directly from the song American Pie written by Don McLean’.

Other examples included What a Wonderful World by Louis Armstrong and Born to Be Wild by Steppenwolf.

Damages are being sought for 500 songs, which would amount to $75 million.

And so, this chapter could go on and on. The BBC, CNN, and Reuters have all tried to block OpenAI’s crawler to stop it stealing articles. Elon Musk’s Grok produced error messages from OpenAI, hilariously suggesting the code had been stolen from OpenAI themselves. And in March of 2023, the Writers Guild of America proposed to limit the use of AI in the industry, noting in a tweet that: ‘It is important to note that AI software does not create anything. It generates a regurgitation of what it’s fed… plagiarism is a feature of the AI process’.

Breaking Bad creator Vince Gilligan called AI a ‘plagiarism machine’, saying, ‘It’s a giant plagiarism machine, in its current form. I think ChatGPT knows what it’s writing like a toaster knows that it’s making toast. There’s no intelligence — it’s a marvel of marketing’.

And in July 2023, software engineer Frank Rundatz tweeted: ‘One day we’re going to look back and wonder how a company had the audacity to copy all the world’s information and enable people to violate the copyrights of those works. All Napster did was enable people to transfer files in a peer-to-peer manner. They didn’t even host any of the content! Napster even developed a system to stop 99.4% of copyright infringement from their users but were still shut down because the court required them to stop 100%. OpenAI scanned and hosts all the content, sells access to it and will even generate derivative works for their paying users’.

I wonder if there’s ever, in history, been such a high-profile startup attracting so many high-profile lawsuits in such a short amount of time. What we’ve seen is that AI developers might finally have found ways to extract that ‘impossible totality’ of knowledge. But is it intelligence? Suspiciously, it seems not to be found in the AI companies themselves, but to come from around the globe – in some sense, from all of us. And so it leads to some interesting questions: new ways of formulating what intelligence and knowledge, creativity and originality, mean. And then, what that might tell us about the future of humanity.

 

Copyright, Property, and the Future of Creativity

There’s always been a wide-reaching debate about what ‘knowledge’ is, how it’s formed, where it comes from, and whose – if anyone’s – it is.

Does it come from God? Is it a spark of individual madness that creates something new? Is it a product of institutions? Collective? Or lone geniuses? How can it be incentivised? What restricts it?

It seems intuitive that knowledge should be for everyone. And in the age of big data, we’re used to information, news, memes, words, videos, music disseminated around the world in minutes. We’re used to everything being on demand. We’re used to being able to look anything up in an instant.

If this is the case, why do we have copyright laws, patent protection, and a moral disdain for plagiarism? After all, without those things knowledge would spread even more freely.

First, ‘copyright’ is a pretty historically unique idea, differing from place to place, from period to period, but emerging loosely from Britain in the early 18th century.

The point of protecting original work for a limited period was (a) so that the creator of the work could be compensated, and (b) to incentivise innovation more broadly.

As for the first: UK law, for example, refers to copyright being applied to the ‘sweat of the brow’ of skill and labour, while US law refers to ‘some minimal degree of creativity’. Copyright does not protect ideas, but how they’re expressed.

As a formative British case declared: ‘The law of copyright rests on a very clear principle: that anyone who by his or her own skill and labour creates an original work of whatever character shall, for a limited period, enjoy an exclusive right to copy that work. No one else may for a season reap what the copyright owner has sown’.

As for the second purpose of copyright – to incentivise innovation – the US constitution grants the government the right, ‘to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their Writings and Discoveries’.

There’s also a balance between copyright and what’s usually called ‘fair use’ – a notoriously ambiguous term, the friend and enemy of YouTubers everywhere – which broadly allows the reuse of copyrighted works if it’s in the public interest: if you’re commenting on it, transforming it substantially, using it in education, and so on.

Many have argued that this is the engine of modernity. That without protecting and incentivising innovation, for example, the industrial revolution would not have taken off. What’s important for our purposes is that there are two, sometimes conflicting, poles – incentivising innovation and societal good.

All of this is being debated in our new digital landscape. But what’s AI’s defence? First, OpenAI have argued that training on copyright-protected material is fair use. Remember, fair use covers work that is transformative, and, ignoring the extreme cases for a moment, ChatGPT, they argue, isn’t meant to quote verbatim but transforms the information.

In a blog post they wrote: ‘Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness’.

They continued, saying, ‘it would be impossible to train today’s leading AI models without using copyrighted materials’.

Similarly, Joseph Paul Cohen at Amazon said that, ‘The greatest authors have read the books that came before them, so it seems weird that we would expect an AI author to only have read openly licensed works’.

This defence also aligns with the long history of the societal gain side of the copyright argument. 

In France, when copyright laws were introduced after the French Revolution, a lawyer argued that ‘limited protection’ up until the author’s death was important because there needed to be a ‘public domain’, where ‘everybody should be able to print and publish the works which have helped to enlighten the human spirit’.

Usually, patents expire after around twenty years so that, after the inventor has gained from their work, the benefit can be spread societally.

So the defence is plausible. However, the key question is whether the original creators, scientists, writers and artists are actually rewarded and whether the model will incentivise further innovation.

If these large language models dominate the internet, and neither cite nor reward the authors they draw from and are trained on, then we lose – societally – any strong incentive to do that work: not only will we not be rewarded financially, but no one will even see it except a data-scraping bot.

The AI plagiarism-detection website Copyleaks analysed ChatGPT 3.5 and estimated that 60% of its output contained plagiarism – 45% contained identical text, 27% minor changes, and 47% paraphrasing (the categories overlap). By some estimates, within a few years 90% of the internet could be AI-generated.
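To give a sense of what an analysis like that measures, here is a toy version of a word-n-gram overlap score – a standard first-pass plagiarism signal, and an assumption on my part, not Copyleaks’ actual (proprietary) method:

```python
# Toy plagiarism signal: Jaccard similarity of word trigrams.
# Identical text scores 1.0; unrelated text scores near 0.0.

def word_ngrams(text: str, n: int = 3) -> set[str]:
    """The set of n-word sequences in `text`, lowercased."""
    words = text.lower().split()
    return {' '.join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a: str, b: str, n: int = 3) -> float:
    """Share of word n-grams the two texts have in common (Jaccard)."""
    ga, gb = word_ngrams(a, n), word_ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if (ga or gb) else 0.0
```

On a measure like this, ‘identical text’ scores near 1.0, ‘minor changes’ somewhere in between, and a competent paraphrase close to 0.0 – which is why detecting paraphrased plagiarism takes more than n-gram matching.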

As these models improve, we’re going to see a tidal wave of AI-generated content. And I mean a tidal wave. Maybe they’ll get better at citing, maybe they’ll strike deals with publishers to pay journalists and researchers and artists, but the fundamental contradiction is that AI developers have an incentive not to do so. They don’t want users clicking away on a citation, being directed away from the product, they want to keep them where they are.

Under these conditions, what would happen to journalism? To art? To science? To anything? No-one rewarded, no-one seen, read, known, no wages, no portfolio, no point. Just bots endlessly rewording everything forever.

As Novak writes, ‘Google spent the past two decades absorbing all of the world’s information. Now it wants to be the one and only answer machine’.

Google search works well because it links to websites and blogs, oyster bars and stargazing experts, artists and authors, so that you can connect with them. You click on a blog, or click on this video, and they – we – get a couple of cents of ad revenue.

But in Bard or Claude or ChatGPT that doesn’t happen. Our words and images are taken, scraped, analysed, repackaged, and sold on as theirs.

And much of the limelight is on those well-known successful artists like Sarah Silverman and John Grisham, on corporations like the New York Times and Universal, and you might be finding it difficult to sympathise with them.

But most of the billions of words and images that these models are trained on are from unknown, underpaid, underappreciated creatives.

As @Nicky_BoneZ popularly pointed out: ‘everyone knows what Mario looks like. But nobody would recognize Mike Finklestein’s wildlife photography. So when you say “super super sharp beautiful beautiful photo of an otter leaping out of the water” You probably don’t realize that the output is essentially a real photo that Mike stayed out in the rain for three weeks to take’.

Okay, so what’s to be done? Well, ironically, I think it’s impossible to fight the tide. And while right now these AIs are kind of frivolous, I think they could become great. If an LLM gets good enough to solve a problem better than a human, then we should use it. If – in fifty years’ time – it produces a dissertation on how to solve world poverty, drawing on every YouTube video and paper and article to do so, who am I to complain?

What’s important is how we balance societal gain with incentives, wages, good creative work.

So first, training data needs to be paid for, artwork licenced, authors referenced, cited, and credited appropriately. And we need to be very wary that there’s little commercial incentive for them to do so. The only way they will is through legal or sustained public pressure.

Second, regulation. Napster was banned – these models aren’t much different. While training on paid-for, licenced, consensually used data is a good thing, it seems common sense that models shouldn’t simply reword text from an unknown book or blog and pass it off as their own. This doesn’t seem controversial.

Which means, third, some sort of transparency. This is difficult because no one wants to give away their trade secrets, but enforcing at least dataset transparency seems logical. I’d imagine a judge will force them to reveal this somewhere; whether it’s made public is another matter.

But I’ll admit I find all of this unsettling. Because, as I said, if these models increasingly learn to find patterns and produce research and ideas in ways that help people, solve societal problems, help with cancer treatments and international agreements and poverty, then that’s a great thing. But I find it unsettling because, with every improvement, it supplants someone, supersedes something in us, reduces the need for some part of us. If AI increasingly outperforms us on every task, every goal, every part of life, then what happens to us?

 

The End of Work and a Different AI Apocalypse

In March 2022, researchers in Switzerland found that an AI model designed to study chemicals could suggest how to make 40,000 toxic molecules in under six hours, including nerve agents like VX, which can kill a person with just a few salt-sized grains.

Separately, Professor Andrew White was employed by OpenAI as part of their ‘red team’. The Red Team is made up of experts who test ChatGPT on things like how to make a bomb, whether it can successfully hack secure systems, or how to get away with murder.

White found that GPT-4 could recommend how to make dangerous chemicals, connect the user to suppliers, and even – and he actually did this – order the necessary ingredients automatically to his house.

The intention with OpenAI’s Red Team is to help them see into that black box – to understand the model’s capabilities. Because the models, based on neural nets and on machine learning at superhuman speed, discover patterns about how to do things that even the developers don’t understand.

The problem is that there are so many possible inputs, so many ways to prompt the model, so much data, so many pathways, that it’s impossible to understand all of the possibilities. 

Outperformance, by definition, means getting ahead – being in front, being more advanced – which, scarily, means doing things in ways we can either only understand in retrospect, by studying what the model has done, or can’t understand at all.

So, in being able to outperform us, get ahead of us, will AI wipe us out? What are the chances of a Terminator-style apocalypse? Many – including Stephen Hawking – genuinely believed that AI was an existential risk.

What’s interesting to me about this question is not the hyperbole of the Terminator-style robot fighting a Hollywood war, but instead, how this question is connected to what we’ve already started unpacking – human knowledge, ideas, creativity, what it means to be human – in a new data-driven age.

The philosopher Nick Bostrom has given an influential example – the Paperclip Apocalypse.

Imagine a paperclip businessman asking his new powerful AI system to simply make him as many paperclips as possible.

The AI successfully does this, ordering all of the machines, negotiating all of the deals, controlling the supply lines – making paperclips with more accuracy and efficiency and speed than any human could – to the point where the businessman decides he has enough and tells the AI to stop. But this goes against the original command. The AI must make as many paperclips as possible, so refuses. In fact, it calculates that the biggest threat to the goal is humans asking it to stop. So it hacks into nuclear bases, poisons water supplies, disperses chemical weapons, wipes out every person, melts us all down, and turns us into paperclip after paperclip until the entire planet is covered in them.

Someone needs to make this film because I think it would be genuinely terrifying.

The scary point is that machine intelligence is so fast that it will, first, always be a step ahead, and second, attempt to achieve goals in ways we cannot understand. In understanding the data it’s working with better than any of us, it makes us useless, redundant.

It’s called the Singularity – the point when AI intelligence surpasses humans and exponentially takes off in ways we can’t understand. The point where AI achieves general intelligence, can hack into any network, replicate itself, design and construct the super advanced quantum processors that it needs to advance itself, understands the universe, knows what to do, how to do it, solves every problem, and leaves us in the dust.

The roboticist Rodney Brooks has made the counter argument.

He’s argued that it’s unlikely the singularity will suddenly happen by accident. Looking at the way we’ve invented and innovated in the past, he asks whether we could have made a Boeing 747 by accident. No: it takes careful planning, complicated cooperation, the coming together of many different specialists – and, most importantly, a 747 is built intentionally.

A plane wouldn’t spontaneously appear and neither will AGI.

It’s a good point, but it misses that while passenger jets might not be built by accident, they certainly crash by accident. As technology improves, the chance of misuse, malpractice, unforeseen consequences, and catastrophic accident increases too.

In 2020, the Pentagon’s AI budget increased from $93m to $268m; by 2024 it was an estimated $1–3 billion. This gives some idea of the threat of an AI arms race. Unlike previous arms races, this one pours billions into research that is, by its very nature, a black box – research we might not be able to understand or control.

When it comes to the apocalypse, I think the way DeepMind’s AI beat Breakout is a perfect metaphor. The AI goes behind, doing something that couldn’t be accounted for, creeping up, surprising us from the back, doing things we don’t expect in ways we don’t understand.

Which is why the appropriation of all human knowledge, the apocalypse, and mass unemployment, are all at root part of the same issue. In each, humans become useless, unnecessary, obsolete, redundant.

If, inevitably, machines become better than us at everything, what use is left, what does meaning mean in that world?

 

Mass Unemployment

Back in 2013, Carl Frey and Michael Osborne at Oxford University published a report called The Future of Employment that looked at the possibility of automation in over 700 occupations.

It made headlines because it predicted that almost half of jobs could be automated, but the authors also developed a framework for analysing which types of jobs were most at risk. High-risk professions included telemarketers, insurance underwriters, data-entry clerks, salespeople, engravers, and cashiers. Therapists, doctors, surgeons, and teachers were at the least risk.

They concluded that, ‘Our model predicts that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labor in production occupations, are at risk’.

The report made a common assumption: creative jobs, jobs that require dexterity, and social jobs – jobs that require human connection – were the safest.

A 2018 City of London report predicted that a third of jobs in London could be performed by machines within twenty years. Another report, from the International Transport Forum, predicted that over two-thirds of truckers could find themselves out of work.

Ironically, contrary to the predictions of many, models like DALL-E and Midjourney have become incredibly creative incredibly quickly, and will only get better, while fully automated trucks and robots that help around the house are proving much harder problems.

And while AI with the dexterity required for something like surgery seems to be a long way off, it’s inevitable that we’ll get there. 

So the question is, will we experience mass unemployment? A crisis? Or will new skills emerge?

After all, contemporaries of the early industrial revolution had the same fears – Luddites destroying the machines that were taking their jobs – but they turned out to be unfounded. Technology supplants some skills while creating the need for new ones.

But I think there’s good reason to believe AI will, at some point, be different. A weaver replaced by a spinning machine during the industrial revolution could, hypothetically, redirect their skill – that learned dexterity and attention to detail could be channelled elsewhere. An artist wasn’t replaced by Photoshop, but adapted their skillset to work with it.

But what happens when machines outperform humans on every metric? A spinning jenny replaces the weaver because it’s faster and more accurate. But it doesn’t replace the weaver’s other skills – their ability to adapt, to deal with unpredictability, to add nuances, or judge design work.

But slowly and surely, machines do get better at all of these skills. If machines outperform the body and the mind, then what’s left? Sure, right now ChatGPT and Midjourney produce a lot of mediocre, derivative, stolen work. But we are only at the very beginning of a historic shift. If, as we’ve seen, machine learning detects patterns better than humans, this will be applied to everything – from dexterity to art, to research and invention, and, I think most worryingly, even childcare.

But this is academic. Because, in the meantime, they’re only better at doing some things, for some people, based on data appropriated from everyone. In other words, the AI is trained on knowledge from the very people it will eventually replace.

Trucking is a perfect example. Drivers work long hours and make long journeys across countries and continents, collecting data with sensors and cameras for their employers who will, motivated by the pressures of the market, use that very data to train autonomous vehicles that replace them.

Slowly, only the elite will survive. Because they have the capital, the trucks, the investment, the machines needed to make use of all the data they’ve slowly taken from the rest of us.

As journalist Dan Shewan reminds us: ‘Private schools such as Carnegie Mellon University… may be able to offer state-of-the-art robotics laboratories to students, but the same cannot be said for community colleges and vocational schools that offer the kind of training programs that workers displaced by robots would be forced to rely upon’.

Remember: intelligence is physical.

Yes, it’s from those stolen images and books, but it also requires expensive servers, computing power, sensors and scanners. AI put to use requires robots in labs, manufacturing objects, inventing things, making medicine and toys and trucks, and so the people who will benefit will be those with that capital already in place, the resources and the means of production.

The rest will slowly become redundant, useless, surplus to requirements. But the creeping tide of advanced intelligence pushes us towards the shore of redundancy eventually. So as some sink and some swim, the question is not what AI can do, but who it can do it for.

 

The End of Humanity

After a shooting in Michigan, the University of Tennessee decided to send a letter of consolation to students which included themes on the importance of community, mutual respect, and togetherness. It said, ‘let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus’.

The bottom of the email revealed it was written by ChatGPT.

One student said, ‘There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself’.

While outsourcing the writing of a boilerplate condolence letter on humanity to a bot might be callous, it reminds me of Lee Sedol’s response when AlphaGo beat him at Go. He was deeply troubled, not because a cold, unthinking machine had beaten him, but because it was creative – beautiful, even. Because it was so much better than him. In fact, his identity was so bound up in being a champion Go player that he retired from the game completely.

In most cases, the use of ChatGPT seems deceitful and lazy. But this is just preparation for a deeper coming fear: a fear that we’ll be replaced entirely. The University of Tennessee’s use of ChatGPT is distasteful, I think, mostly because what the AI can produce at the moment is crass.

But imagine a not-too-distant world where AI can do it all better. Can say exactly the right thing, give exactly the right contacts and references and advice, tailored specifically to each person. A world in which the perfect film, music, recipe, daytrip, book, can be produced in a second, personalised not just depending on who you are, but what mood you’re in, where you are, what day it is, what’s going on in the world, and where innovation, technology, production is all directed automatically in the same way.

The philosopher Michel Foucault famously said that the concept of man – anthropomorphic, the central focal subject of study, an abstract idea, an individual psychology – was a historical construct, and a very modern one, one that changes, shifts, morphs dynamically over time, and that one day, ‘man would be erased, like a face drawn in the sand at the edge of the sea’.

It was once believed everywhere that man had a soul. It was a belief that motivated the 17th-century philosopher René Descartes, who many point to as providing the foundational moment of modernity itself. Descartes was inspired by the scientific changes going on around him. Thinkers like Galileo and Thomas Hobbes were beginning to describe the world mechanistically, like a machine running like clockwork: atoms hitting atoms, passions pushing us around, gravity pulling things to the earth. There was nothing mysterious about this view. Unlike previous ideas about souls, spirits, and divine plans, the scientific materialist view meant the entire universe, and us in it, was explainable – marbles and dominoes, atoms and photons, bumping into one another, cause and effect.

This troubled Descartes. Because, he argued, there was something special about the mind. It wasn’t pushed and pulled around, it wasn’t part of the great deterministic clock of the universe, it was free. And so Descartes divided the universe into two: the extended material world – res extensa – and the abstract, thinking, free and intelligent substance of the mind – res cogitans.

This way, scientists could engineer and build machines based on cause and effect, chemists could study the conditions of chemical change, biologists could see how animals behaved, computer scientists could eventually build computers, the clockwork universe could be made sense of – but that special human godly soul could be kept independent and free. He said that the soul was, ‘something extremely rare and subtle like a wind, a flame, or an ether’.

This duality defined the next few hundred years. But it’s increasingly come under attack. Today, we barely recognise it. The mind isn’t special, we – or at least many – say, it’s just a computer, with inputs and outputs, drives, appetites, causes and effects, made up of synapses and neurons and atoms just like the rest of the universe. This is the dominant modern scientific view.

What does it mean to have a soul in an age of data? To be human in an age of computing? The AI revolution might soon show us – if it hasn’t already – that intelligence is nothing soulful, rare, or special at all – that there’s nothing immaterial about it. That like everything else it’s made out of the physical world. It’s just stuff. It’s algorithmic, it’s pattern detection, it’s data driven.

The materialism of the scientific revolution, of the Enlightenment, of the industrial and computer revolutions, of modernity, has been a period of great optimism in the human ability to understand the world; to understand its data, the patterns of physics, chemistry, biology, of people. It has been a period of understanding.

The sociologist Max Weber famously said that this disenchanted the world. That before the Enlightenment the world was a ‘great enchanted garden’ because everything – each rock and insect, each planet or lightning strike – was mysterious in some way. It did something not because of physics, but because some mysterious creator willed it to.

But slowly, instead, we’ve disenchanted the world by understanding why lightning strikes, how insects communicate, how rocks are formed, trees grow, creatures evolve.

But what does it really mean to be replaced by machines that can perform every possible task better than us? It means, by definition, that we once again lose that understanding. Remember, even their developers don't know why neural nets choose the paths they choose. They discover patterns that we can't see. AlphaGo makes moves humans don't understand. ChatGPT, in the future, could write a personalised guide to an emotional, personal issue you didn't understand yourself. Innovation could be decided by factors we don't comprehend. We might not have been made by gods, but we could be making them.

And so the world becomes re-enchanted. And as understanding becomes superhuman, it necessarily leaves us behind. In the long arc of human history, this age of understanding has been a blip. A tiny island amongst a deep, stormy, unknown sea. We will be surrounded by new enchanted machines, wearables, household objects, nanotechnology.

We deny this. We say: sure, it can win at chess, but Go is the truly skilful game. Sure, it can pass the Turing Test, but not really. Sure, it can paint, but can it look after a child? Yes, it can calculate big sums, but it can't understand emotions, complex human relationships, or desires.

But slowly, AI catches up with humans, then it becomes all too human, then more than human.

The transhumanist movement predicts that to survive, we’ll need to merge with machines through neural implants, bionic improvements, by uploading our minds to machines so that we can live forever. Hearing aids, glasses, telescopes, and prosthetics are examples of ways we already augment our limited biology. With AI-infused technology, these augmentations will only improve our weaknesses, make us physically and sensorially and mentally better. First we use machines, then we’re in symbiosis with them, then eventually, we leave the weak fleshy biological world behind.

One of the fathers of transhumanism, Ray Kurzweil, wrote, 'we don't need real bodies'. In 2012 he became a director of engineering at Google. Musk, Thiel, and many in Silicon Valley are transhumanists.

Neuroscientist Michael Graziano points out, ‘We already live in a world where almost everything we do flows through cyberspace’.

We already stare at screens, are driven by data, wear VR. AI can identify patterns far back into the past and far away into the future. It can see at a distance and speed far superior to human intelligence.

Hegel argued we were moving towards absolute knowledge. In the early twentieth century, the scientist and Jesuit Pierre Teilhard de Chardin argued we’d reach the Omega Point – when humanity would ‘break through the material framework of Time and Space’, merging with the divine universe, becoming ‘super-consciousness’. 

But transhumanism is based on optimism: that some part of us will be able to keep up with the ever-increasing speed of technological adaptation. As we've seen, so far the story of AI has been one of absorbing, appropriating, stealing all of our knowledge, taking more and more – until what's left? What's left of us?

Is it safe to assume there are things we cannot understand? That we cannot comprehend them because the patterns don't fit in our heads? That the speed of our firing neurons isn't fast enough? That an AI will always work out a better way?

The history of AI fills me with sadness because it points towards the extinction of humanity, if not literally, then in redundancy. I think of Kierkegaard, who wrote in the 19th century, ‘Deep within every man there lies the dread of being alone in the world, forgotten by God, overlooked among the tremendous household of millions and millions’.

 

Or a New Age of Artificial Humanity

Or, we could imagine a different world, a better one, a freer one. 

We live in strangely contradictory times, times in which we're told anything is possible – technologically, scientifically, medicinally, industrially – that we will be able to transcend the confines of our weak, fleshy bodies and do whatever we want. That we'll enter the age of the superhuman.

But on the other hand, we can’t seem to provide a basic standard of living, a basic system of political stability, a basic safety net, a reasonable set of positive life expectations, for many people around the world. We can’t seem to do anything about inequality, climate, or war.

If AI will be able to do all of these incredible things better than all of us, then what sort of world might we dare to imagine? A potentially utopian one – where we all have access to personal thinking autonomous machines that build for us, transport for us, research for us, cook for us, work for us, help us in every conceivable way. So that we can create the futures we all want.

What we need is no less than a new model of humanity. The inventions of technology during the 19th century – railways, photography, radio, industry – were accompanied by new human sciences – the development of psychology, sociology, economics. The AI revolution, whenever it arrives, will come with new ways of thinking about us too.

Many have criticised Descartes' splitting of the world into two – into thought and material. It gives the false sense that intelligence is something privileged, special, incomprehensible, and detached. But as we've seen, knowledge is spread everywhere – across people, across the world, through connections, in emotions, routines, relationships – and it's produced all of the time.

Silicon Valley has always thought that, like intelligence, it was detached and special. Eric Schmidt of Google, for example, has said that 'the online world is not truly bound by terrestrial laws'. In the 90s, John Perry Barlow said that cyberspace consists of 'thought itself', continuing, 'ours is a world that is both everywhere and nowhere, but it is not where bodies live'.

But as we've seen, our digital worlds are not just abstract code that exists nowhere in particular – mathematical, in a 'cloud' somewhere up there. They're made up of very real stuff: of sensors and scanners, cameras, labour, friendships, books and plagiarism.

One of Descartes’ staunchest critics – the Dutch pantheist philosopher Baruch Spinoza – argued against Descartes’ dualistic view of the world. He saw that all of the world’s phenomena – nature, us, animals, forces, mathematical bodies and thought – were part of one scientific universe. That thought isn’t separate, that knowledge is spread throughout, embedded in everything. He noticed how every single part of the universe was connected in some way to every other part – that all was in a dynamic changing relationship.

He argued that the universe ‘unfolded’ through these relationships, these patterns. That to know the lion you had to understand biology, physics, the deer, the tooth, the savanna – all was in a wider context, and that the best thing anyone could do was to try and understand that context. He wrote, ‘The highest activity a human being can attain is learning for understanding, because to understand is to be free’.

Unlike Descartes, Spinoza didn’t think that thought and materiality were separate, but part of one substance – all is connected, the many are one – and so God or Meaning or Spirit – whatever you want to call it – is spread through all things, part of each rock, each lion, each person, each atom, each thought, each moment. It’s about grasping as much of it as possible. Knowing means you know what to do – and that is the root of freedom.

Spinoza's revolutionary model of the universe lines up far better with AI research than Descartes' does, because many in AI argue for a 'connectionist' view of intelligence: neural nets, deep learning, the neurons in the brain – they're all intelligent because they take data about the world and look for patterns – connections – in that data.
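To make 'connectionist' concrete, here is a toy sketch in Python – purely illustrative, nothing like a production model – of a single artificial neuron that learns the logical AND pattern simply by nudging the weights of its connections against examples:

```python
# One artificial neuron: intelligence as nothing but weighted connections,
# adjusted by data. Here it learns the logical AND pattern from examples.

def step(x):
    return 1 if x > 0 else 0

# training data: (inputs, desired output) - the AND pattern
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # repeat over the examples
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out          # how wrong was the guess?
        w[0] += lr * err * x1       # nudge the connections accordingly
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

No rule for AND is ever written down; the 'knowledge' ends up spread across the connection weights – which is the connectionist claim in miniature.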

Intelligence is not in here, it’s out there, everywhere.

Crawford has emphasised that AI is made up of, ‘Natural resources, fuel, human labor, infrastructures, logistics, histories’.

It's why Crawford's book is called The Atlas of AI: she seeks to explore the way AI connects to, maps, and captures the physical world. It's lithium and cobalt mining, it's wires and sensors, it's Chinese factories and conflicts in the Congo – Intel alone uses 16,000 suppliers around the world.

Connections are what matters. Intelligence is a position, a perspective; it's not what you know, it's who you know and what resources you can command; it's not how intelligent you are, it's what, who, and where you've got access to.

I think this is the beginning of a positive model of our future with technology.

True artificial intelligence will connect to and build upon and work in a relationship with other machines, other people, other resources – it will work with logistics, shipping, buying and bargaining, printing and manufacturing, controlling machines in labs and research in the world. How many will truly have access to this kind of intelligence?

Connection, access, control will be what matters in the future. Intelligence makes little sense if you don't have the ability to reach out and do things with it, shape it, be part of it, use it. AI might do things, work things out, control things, build things better than us – but if who gets to access these great new industrial networks determines the shape all of this takes, then I think we can see why we're entering, more and more, an age of storytelling.

If AI can do the science better than any of us, if it can write the best article on international relations, if it can build machines and cook for us and work for us, what will be left of us? Maybe our stories.

We will listen to the people who can tell the best stories about what we should be doing with these tools, what shape our future should take, what ethical questions are interesting, which artistic ones are. Stories are about family, emotion, journey and aspiration, local life, friendship, games – all of those things that make us still human.

Maybe the AI age will be more about meaning.

Meaning is about being compelling, passionate, making a case, articulating – using the data and the algorithms and the inventions to tell a good story about what we should be doing with it all, about what matters. The greatest innovators and marketers knew this: it's not the technology that matters, it's the story that comes with it.

More films, music, more local products and festivals, more documentaries and ideas and art, more exploring the world, more working on what matters to each of us.

I like to think I won't be replaced by ChatGPT because while it might eventually write a more accurate script about the history of AI, it won't do this bit as well – because I like to think you also want to know a little bit of me, my stories, my values, my emotions and idiosyncrasies, my style and perspective, who I am – so that you can agree or disagree with my idea of humanity.

I don’t really care how factories run, I don’t really care about the mathematics of space travel, I don’t care too much about the code that makes AI run. I care much more about what the people building it all think, feel, value, believe, how they live their lives. I want to understand these people as people so I work out what to agree with, and what not to.

We too often think of knowledge as kind of static – a body of books, Wikipedia, in the world ready to be scientifically observed. But we forget it’s dynamic, changing, about people, about lives.

It’s about trends, new friends, emotions, job connections, art and cultural critique, new music, political debate, new dynamic-changing ideas, hopes, interests, dreams, passions.

And so I think the next big trend in AI will be using LLMs on this kind of knowledge. It's why Google tried and failed to build a social network. And why Meta, Twitter, and LinkedIn could be ones to watch – they have access to real-time social knowledge that OpenAI, at the moment, doesn't. Maybe OpenAI will try to build a social network based on ChatGPT? The platforms do, at least, have even more data, as they analyse not a static 'Pile' of books but the questions people are asking, their locations, their quirks.

Using this type of data could have incredible potential. It could teach us so much about political, psychological, sociological, or economic problems if it was put to good use. Some, for example, have argued that dementia could be diagnosed by the way someone uses their phone.

Imagine a social network using data to make suggestions about what services people in your town need, imagine AI using your data to make honest insights into emotional or mental health issues you have, giving specific, research driven, personalised and perfect roadmaps on how to beat an addiction or an issue you have.

I’d be happy for my data to be used honestly, transparently, ethically, scientifically; especially if I was compensated too. I want a world where people contribute to and are compensated for and can use AI productively to have easier, more creative, more fulfilling, meaningful lives. I want to be excited in the way computer scientist Scott Aaronson is when he writes: ‘An alien has awoken — admittedly, an alien of our own fashioning, a golem, more the embodied spirit of all the words on the internet than a coherent self with independent goals. How could our eyes not pop with eagerness to learn everything this alien has to teach?’

 

Conclusion: Getting to the Future

So how do we get to a better future? To make sure everyone benefits from AI, I think we need to focus on two things, both types of bias: a cultural bias and a competitive bias. Then, further, we need to think about wider political issues.

As well intentioned as anyone might be, bias is a part of being human. We’re positioned, we have a perspective, a culture. Models are trained through ‘reinforcement learning’ – nudging the AI subtly in a certain direction.

As DeepMind founder Mustafa Suleyman writes in The Coming Wave, ‘researchers set up cunningly constructed multi-turn conversations with the model, prompting it to say obnoxious, harmful, or offensive things, seeing where and how it goes wrong. Flagging these missteps, researchers then reintegrate these human insights into the model, eventually teaching it a more desirable worldview’.

‘Desirable’, ‘human insights’, ‘flagging missteps’. All of this is being done by a very specific group of people in a very specific part of the world at a very specific moment in history. Reinforcement learning means someone is doing the reinforcing. On top of this, as many studies have shown, if you train AI on the bulk sum of human articles and books from history, you get a lot of bias, a lot of racism, a lot of sexism, a lot of homophobia.
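The loop Suleyman describes can be caricatured in a few lines of Python. This is a deliberately toy sketch – the replies, scores, and `human_feedback` function are all invented for illustration; real reinforcement learning from human feedback trains a reward model over a neural network, not a dictionary – but it shows where the human thumb rests on the scale:

```python
# Toy "flag and reinforce" loop: human feedback nudges which outputs
# the "model" prefers. Everything here is invented for illustration.
import random

random.seed(0)

# two candidate replies, equally preferred at first
replies = {"helpful answer": 1.0, "obnoxious answer": 1.0}

def model_reply():
    # sample a reply in proportion to its current score
    total = sum(replies.values())
    r = random.uniform(0, total)
    for reply, score in replies.items():
        r -= score
        if r <= 0:
            return reply
    return reply

def human_feedback(reply):
    # the "flagging" step: a person decides what counts as a misstep
    return -1 if "obnoxious" in reply else 1

for _ in range(200):  # many rounds of prompting and flagging
    reply = model_reply()
    replies[reply] = max(0.01, replies[reply] + 0.1 * human_feedback(reply))

# the "desirable" reply now dominates - and what counted as desirable
# was decided entirely by whoever wrote human_feedback()
print(max(replies, key=replies.get))  # → helpful answer
```

The point of the sketch is the last comment: 'desirable' is whatever the feedback function's authors say it is.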

Studies have shown how heart attacks in women have been missed because the symptoms doctors look for are based on data from men's heart attacks. Facial recognition has higher error rates for women and for people with darker skin because the systems are trained mostly on images of white men. Amazon's early experiment in machine-learning CV selection was quietly dropped because it wasn't choosing any CVs from women.

These sorts of studies are everywhere. The data is biased, but it’s also being corrected for, shaped, nudged by a group with their own biases.

Around 700 people work at OpenAI. Most of what they do goes on behind the black box of business meetings and board rooms. And many have pointed out how ‘weird’ AI culture is.

Not in a pejorative way, just how far from the mean person you’re going if that’s your life experience: very geeky, for lack of a better word. Very technologically-minded, techno-positive. Very entrepreneurial.

They’re all – as Adrian Daub points out in ‘What Tech Calls Thinking’ – transhumanists, Ayn Rand libertarians, with a bit of 60s counterculture anti-establishmentarianism thrown in.

The second ‘bias’ is the bias towards competitive advantage. Again, the vast majority of people want to do good, want to be ethical, want to make something great. But often competitive pressures get in the way.

We saw this when OpenAI realised they needed private funding to compete with Google. We see it with their reluctance to be transparent with how they train datasets because competitors could learn from that. We see it with AI weapons and fears about AI in China. The logic running through is, ‘if we don’t do this, our competitors will’. If we don’t get this next model out, Google will outperform us. Safety testing is slow and we’re on a deadline. If Instagram makes their algorithm less addictive, TikTok will come along and outperform them.

This is why Mark Zuckerberg actually wants regulation. Suleyman of DeepMind has set up multiple AI businesses and actually wants regulation. Gary Marcus – maybe the leading expert on AI, who has sold an AI startup to Uber and founded a robotics company – actually wants regulation.

If wealthy, free market, tech entrepreneurs – not exactly Chairman Mao – are asking for the government to step in, that should tell you something.

Here are some things we do regulate in some way: medicine, law, clinical trials, pharmaceuticals, biological weapons, chemical, nuclear – all weapons actually – buildings, food, air travel, cars and transport, space travel, pollution, electrical engineering. Basically, anything potentially dangerous.

Ok, so what could careful regulation look like? I always think regulation should aim for the maximum amount of benefit for all with the minimal amount of interference.

First, transparency, in some way, will be central.

There's an important concept called interoperability: when procedures are designed in an open way so that others can use them too. Banking systems, plugs and electrics, screw heads, traffic control – all are interoperable. Microsoft has been forced into being interoperable so that anyone can build applications for Windows.

This is a type of openness and transparency. It's for technical experts, but there needs to be some way that auditors, safety testers, regulatory bodies – and the rest of us – can, in varying ways, see under the hood of these models. Regulators could pass laws on dataset transparency, or on transparency about where the answers LLMs give come from: requiring references, sources, and crediting, so that people are compensated for their work.

As Wooldridge writes, ‘transparency means that the data that a system uses about us should be available to us and the algorithms used within that should be made clear to us too’.

This will only come from regulation. That means regulatory bodies with qualified experts, answerable democratically to the electorate. Suleyman points out that the Biological Weapons Convention has just four employees – fewer than a McDonald's. Regulatory bodies should work openly with networks of academics and industry experts, making findings public either to them or to all.

There are plenty of precedents. Regular audits, safety and clinical trials, transport, building, chemical regulatory bodies. These don’t even necessarily need a heavy hand from the government. Regulation could force AI companies of a certain size to spend a percentage of their revenue on safety testing.

Suleyman writes, ‘as an equal partner in the creation of the coming wave, governments stand a better chance of steering it toward the overall public interest’.

There is a strange misconception that regulation means less innovation. But innovation always happens in a context. Recent innovations in green technology, batteries, and EVs wouldn't have come about without regulatory changes, and might have happened much sooner with different incentives and tax breaks. The internet, along with many other scientific and military advances, was not the result of private innovation but of an entrepreneurial state.

I always come back to openness, transparency, accountability, and democracy, because, as I said at the end of How the Internet Was Stolen, ‘It only takes one mad king, one greedy dictator, one slimy pope, or one foolish jester, to nudge the levers they hover over towards chaos, rot, and even tyranny’.

Which leads me to the final point. AI, as we’ve seen, is about the impossible totality. It might be the new fire or electricity because it implicates everything, everyone, everywhere. And so, more than anything, it’s about the issues we already face. As it gets better and better, it connects more and more with automated machines, capital, factories, resources, people and power. It’s going to change everything and we’ll likely need to change everything too. We’re in for a period of mass disruption – and looking at our unequal, war torn, climate changing world – we need to democratically work out how AI can address these issues instead of exacerbate them.

If, as is happening, it surpasses us and drifts off, we need to make sure we're tethered to it, connected to it, taught by it, in control of it – or, rather than being wiped out, I'd bet we'll be left stranded in the wake of a colossal juggernaut we don't understand, left bobbing in the middle of an endless, exciting sea.

 

Sources

Kate Crawford, The Atlas of AI

Meghan O’Gieblyn, God, Human, Animal, Machine

Michael Wooldridge, A Brief History of AI

Ivana Bartoletti, An Artificial Revolution: On Power, Politics and AI

Simone Natale, Deceitful Media: Artificial Intelligence and Social Life after the Turing Test

Nick Dyer-Witheford, Atle Mikkola Kjosen, and James Steinhoff, Inhuman Power: Artificial Intelligence and the Future of Capitalism 

Mary Gray and Siddharth Suri, Ghost Work: How To Stop Silicon Valley from Building a New Global Underclass

Toby Walsh, Machines Behaving Badly: The Morality of AI

Ross Douthat, The Return of the Magicians

Shoshana Zuboff, The Age of Surveillance Capitalism

Simon Stokes, Art and Copyright

https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ 

https://fortune.com/2023/09/25/getty-images-launches-ai-image-generator-1-8-trillion-lawsuit/ 

https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai?CMP=twt_b-gdnnews 

https://www.theguardian.com/books/2023/sep/21/your-face-belongs-to-us-by-kashmir-hill-review-nowhere-to-hide 

Sarah Silverman sues OpenAI and Meta over alleged copyright infringement in AI training – Music Business Worldwide 

https://www.musicbusinessworldwide.com/blatant-plagiarism-5-key-takeaways-from-universals-lyrics-lawsuit-against-ai-unicorn-anthropic/ 

https://copyleaks.com/blog/copyleaks-ai-plagiarism-analysis-report 

Spreadsheet of ‘ripped off’ artists lands in Midjourney case • The Register 

https://www.forbes.com/sites/robsalkowitz/2022/09/16/midjourney-founder-david-holz-on-the-impact-of-ai-on-art-imagination-and-the-creative-economy/?sh=59073fc62d2b 

https://spectrum.ieee.org/midjourney-copyright 

https://www.theguardian.com/technology/2019/may/28/a-white-collar-sweatshop-google-assistant-contractors-allege-wage-theft 

Erasing Authors, Google and Bing’s AI Bots Endanger Open Web | Tom’s Hardware (tomshardware.com) 

https://www.tomshardware.com/news/google-bard-plagiarizing-article 

https://www.forbes.com/sites/mattnovak/2023/05/30/googles-new-ai-powered-search-is-a-beautiful-plagiarism-machine/?sh=7ce30bb40476 

https://www.newsweek.com/how-copycat-sites-use-ai-plagiarize-news-articles-1835212#:~:text=Content%20farms%20are%20using%20artificial,New%20York%20Times%20and%20Reuters

https://www.androidpolice.com/sick-of-pretending-ai-isnt-blatant-plagiarism/ 

https://www.theverge.com/2023/10/6/23906645/bbc-generative-ai-news-openai 

https://www.calcalistech.com/ctechnews/article/hje9kmb4n 

https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/ 

James Bridle, The Stupidity of AI, https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt

 https://www.forbes.com/sites/gilpress/2020/04/27/12-ai-milestones-4-mycin-an-expert-system-for-infectious-disease-therapy/ 

https://www.technologyreview.com/2016/03/14/108873/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/ 

https://spectrum.ieee.org/how-ibms-deep-blue-beat-world-champion-chess-player-garry-kasparov 

https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/?sh=759e3a90674f  

https://www.theverge.com/2023/1/23/23567448/microsoft-openai-partnership-extension-ai 

AI promises jobs revolution but first it needs old-fashioned manual labour – from China | South China Morning Post (scmp.com) 

Facebook Content Moderators Take Home Anxiety, Trauma | Fortune 


 https://nvidianews.nvidia.com/news/nvidia-teams-with-national-cancer-institute-u-s-department-of-energy-to-create-ai-platform-for-accelerating-cancer-research#:~:text=The%20Cancer%20Moonshot%20strategic%20computing,and%20understand%20key%20drivers%20of 

https://www.wired.com/story/battle-over-books3/ 

https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-ais-conversation 

https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1?r=US&IR=T#chatgpt-passed-all-three-parts-of-the-united-states-medical-licensing-examination-within-a-comfortable-range-10 

https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/ 

https://www.youtube.com/watch?v=aircAruvnKk&ab_channel=3Blue1Brown 

Medler, Connectionism, https://web.uvic.ca/~dmedler/files/ncs98.pdf 

AntiSocial: How Social Media Harms https://www.thenandnow.co/2024/02/28/antisocial-how-social-media-harms-2/ https://www.thenandnow.co/2024/02/28/antisocial-how-social-media-harms-2/#respond Wed, 28 Feb 2024 10:10:09 +0000 https://www.thenandnow.co/?p=1044 Take a look at this graph. It shows an increase in young American teenagers’ rates of depression, with a notable uptick, especially in girls, since around 2010. It, and studies like it, are at the centre of a debate around social media and mental health. Findings like this have been replicated in many countries. That […]


Take a look at this graph. It shows an increase in young American teenagers’ rates of depression, with a notable uptick, especially in girls, since around 2010. It, and studies like it, are at the centre of a debate around social media and mental health.

Findings like this have been replicated in many countries: in many cases, reports of mental health problems – depression, anxiety, self-harm, suicide, and so on – have almost tripled.

Psychologist Jonathan Haidt argues that the timing is clear: the cause is social media. Others have pointed to the 2008 financial crash, climate change, worries about the future. But Haidt asks why these would affect teenage girls in particular.

He points to Facebook's own research, leaked by the whistleblower Frances Haugen, which found: 'Teens themselves blame Instagram for increases in the rate of anxiety and depression… this reaction was unprompted and consistent across all groups'.

In 2011, in surveys, around one in three teenage girls reported experiencing persistent ‘sadness or hopelessness’. Today, the American CDC Youth Risk Survey reports that 57% do. In some studies, shockingly, 30% of young people say they’ve considered suicide, up from 19%.

At least 55 studies have found a significant correlation between social media and mood disorders. A study of 19,000 British children found the prevalence of depression was strongly associated with time spent on social media.

Many studies have found that time watching television or Netflix is not the problem: it's specifically social media.

Of course, causation rather than correlation is difficult to prove. Social media has become ubiquitous over a period in which the world has changed in many other ways. Who's to say it's not fear of existential threats from climate change, inequality, or global politics – or even a more acute focus on mental health more broadly?

But Haidt points out that the correlation between social media use and mental health problems is stronger than the correlation between childhood lead exposure and impaired brain development, and stronger than that between binge drinking and poor overall health. And both of those are things we address.

He argues all of these studies – those 55 at least, and many, many more that are related – are not just ‘random noise’. He says a ‘consistent story is emerging from these hundreds of correlational studies’.

Instagram was founded in 2010, just before that uptick. And the iPhone 4 was released at the same time – the first iPhone with a front-facing camera. I remember when it was 'cringe' to take a selfie.

It also makes sense qualitatively. School-age children are particularly sensitive to social dynamics, bullying, and self-worth. And now they're suddenly bombarded with celebrity images, idealised body shapes and beauty standards – endless images and videos to compare themselves to, on demand. On top of this, social networks like Instagram display the size of your social group for everyone to see: how many people like you, how many like your next post, your comments – and, more importantly, as a result, how many people don't.

Social media is popularity quantified for everyone in the schoolyard to see.

One study, using an app designed to imitate Instagram, found that participants exposed to images manipulated to look extra attractive reported worse body image in the period afterwards.

Another study looked at the rollout of Facebook across university campuses in its early years and compared those periods with studies of mental health. It found that when Facebook was introduced to an area, symptoms of poor mental health – especially depression – increased.

Another study looked at areas as high-speed internet was introduced – making social media more accessible – and then looked at hospital data. They concluded: ‘We find a positive and significant impact on girls but not on boys. Exploring the mechanism behind these effects, we show that HSI increases addictive Internet use and significantly decreases time spent sleeping, doing homework, and socializing with family and friends. Girls again power all these effects’.

Young girls, for various reasons, seem to be especially affected. However, the reasons why are difficult to establish – although idealised beauty standards are one obvious answer.

One researcher, epidemiologist Yvonne Kelley, said: ‘One of the big challenges with using information about the amount of time spent on social media is that it isn’t possible to know what is going on for young people, and what they are encountering whilst online’.

In 2017, here in the UK, a 14-year-old girl, Molly Russell, took her own life after looking at posts about self-harm and suicide.

The Guardian reported: ‘In a groundbreaking verdict, the coroner ruled that the “negative effects of online content” contributed to Molly’s death’.

The report said that, ‘Of 16,300 pieces of content that Molly interacted with on Instagram in the six months before she died, 2,100 were related to suicide, self-harm and depression. It also emerged that Pinterest, the image-sharing platform, had sent her content recommendation emails with titles such as “10 depression pins you might like”’.

Studies have found millions of posts of self-harm on Instagram; the hashtag ‘#cutting’ had around 50,000 posts each month.

A Swansea University study, which included respondents with a history of self-harm and those without, found that 83% of them had been recommended self-harm content on Instagram and TikTok without searching for it. And three quarters of those who self-harmed had harmed themselves even more severely as a result of seeing self-harm content.

One researcher said, ‘I jumped on Instagram yesterday and wanted to see how fast I could get to a graphic image with blood, obvious self-harm or a weapon involved… It took me about a minute and a half’.

According to an EU study, 7% of 11-16 year olds have visited self-harm websites. These are websites, forums, and groups that encourage and often explicitly admire cutting. One Tumblr blog posts suicide notes and footage of suicides. Many communities have their own language – codes and slang.

Another study found that, to no one’s surprise, those who had visited pro self-harm, eating disorder, or suicide websites reported lower overall levels of self-esteem, happiness, and trust.

 

Harm Principles

Okay, but anything can be harmful. Crossing the road carries risks. So do many other technologies – driving, air travel, factories, medicines. But with other technologies we identify those risks, the harmful effects or side effects, to try to ameliorate them.

These problems are bound up, and often come into contact with, other values that we hold dear – free speech, freedom for parents to raise children in the way they wish, the liberal live-and-let-live attitude.

But we usually tolerate intervention when there is a clear risk of harm.

Our framework for thinking about liberal interventionism comes from the British philosopher J.S. Mill’s harm principle. That my freedom to swing my fist ends at your face. That we are free to do what we wish as long as it doesn’t harm others.

Mill wrote: ‘The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others’.

We – usually the police or government – prevent violence before it happens, investigate threats, make sure food and medicine and other consumer products aren’t harmful or poisonous. We regulate and have safety codes to make sure technology, transport, and buildings are safe.

So to make sense of all of this, I want to start from cases where social media has actually harmed in some way, and work back from there. One of the problems, as we’ll see, is that it’s not always clear where to draw the line or how to draw it.

 

Harmful Posts

First, is there even any such thing as a harmful post? After all, a post is not the same as violence. It might encourage, endorse, promote, lead to, or raise the likelihood of harm. But it’s not harm itself. As Jonathan Rauch says, ‘If words are just as violent as bullets, then bullets are only as violent as words’.

But in other contexts, we do intervene before the harm is done: false advertising, leaving harmful ingredients off labels, libel laws. We arrest and prosecute for planning violence, even though it hasn’t been carried out, and for threats.

These are cases where words and violence collide. I call it ‘edge speech’ – they’re right at the edge of where an abstract word signals that something physical is about to be done in the world.

During the Syrian Civil War, which started in 2011, at least 570 British citizens travelled to Syria to fight, many of them for ISIS.

The leader – Abu Bakr Al-Baghdadi – called for Sunni youths around the world to come and fight in the war, saying, ‘I appeal to the youths and men of Islam around the globe and invoke them to mobilise and join us to consolidate the pillar of the state of Islam and wage jihad’.

ISIS had a pretty powerful social media presence. One recruitment video was called, ‘There’s no life without Jihad’. They engaged in a ‘One Billion’ social media campaign to try and raise one billion fighters. They had a free app to keep up with ISIS news, ‘The Dawn of Glad Tidings’, and used Twitter to post pictures, including those of beheadings.

The Billion campaign, with its hashtags, led to 22,000 tweets on Twitter within four days. The hashtag ‘#alleyesonISIS’ on Twitter had 30,000 tweets.

One Twitter account had almost 180,000 followers, its tweets viewed over 2 million times a month, with two thirds of foreign ISIS fighters following it.

Ultimately, the British Government alone requested the removal of 15,000 ‘Jihadist propaganda’ posts.

Or take another example, what’s been called ‘Gang Banging’.

Homicides in the UK involving 16-24 year olds have risen by more than 60% in the past five years. There are an increasing number of stories of provocation through platforms like Snapchat. In one instance in the UK, a 13-year-old was stabbed to death by two other boys, aged 13 and 14, after an escalation involving bragging about knives which began on Snapchat. In another, a 16-year-old was filmed dying on Snapchat after being stabbed.

One London youth worker told Vice, ‘Snapchat is the root of a lot of problems. I hate it’, ‘It’s full of young people calling each other out, boasting of killings and stabbings, winding up rivals, disrespecting others’.

Another said, ‘Some parts of Snapchat are 24/7 gang culture. It’s like watching a TV show on social media with both sides going at it, to see who can be more extreme, who can be hardest’.

Vice reports that much gang violence now plays out on Snapchat in some way, with posts being linked with reputation, impressing, threatening, humiliating, boasting, and, of course, eventually, escalating.

Youth worker and author Ciaran Thapar said, ‘When someone gets beaten up on a Snapchat video, to sustain their position in the ecosystem they have to counter that evidence with something more extreme, and social media provides space to do that. It is that phenomenon that’s happening en masse’.

The head of policing in the UK also warned that social media was driving children to increasing levels of violence: https://www.bbc.co.uk/news/uk-43603080

 

Hate Speech

Or take another example, controversial hate speech laws.

The UN says: ‘In common language, “hate speech” refers to offensive discourse targeting a group or an individual based on inherent characteristics (such as race, religion or gender) and that may threaten social peace’.

This latter part is often forgotten. That the point of hate speech laws – rightly or wrongly, as we’ll see – is to address threats of harm before they happen.

The Universal Declaration of Human Rights – which many countries have adopted and most in Europe at least have similar laws to – declares that, ‘In the exercise of their rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society’.

But, in an exception: ‘Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law’.

These laws were developed after the Nazi atrocities during the Second World War, and it was argued that laws of this type – now called hate speech laws – were necessary because the threat of harm from things like genocide was so great.

The UN’s David Kaye writes that, ‘the point of Article 20(2) was to prohibit expression where the speaker intended for his or her speech to cause hate in listeners who would agree with the hateful message and therefore engage in harmful acts towards the targeted group’.

It wasn’t meant to ban speech that caused offence, but to prevent speech that would lead to violence.

Of course, the problem is that this is very difficult to define, but by many metrics speech loosely defined as ‘hate speech’ has increased over the past few years.

In ‘Western’ countries including New Zealand, Germany, and Finland, 14-19% of people report being harassed online.

A 2020 study in Germany found observations of hate speech online had almost doubled from 34% to 67% in the last five years.

Just under 50% of people in the UK, US, Germany and Finland (aged 15 to 35) answered yes to: ‘In the past three months, have you seen hateful or degrading writings or speech online, which inappropriately attacked certain groups of people or individuals?’

And some research has found that youth suicide attempts are lower in US states that have hate crime laws.

 

The Edges of Harm

Okay, so when should social media companies and the government step in? Should social media companies host self-injury groups? Should the Ku Klux Klan have a Facebook page? What’s the difference between a joke and harassment? Should the police ever be involved? There’s also a variety of ways of addressing issues: from banning, to hiding posts, to shadow banning and demonetising, age restriction, to civil suits, fines, and prosecution.

Then there’s the problem of overreach. I’ve had this channel demonetised, and videos age-restricted and demonetised – videos I spend months working on and get nothing back from – which never recover in the algorithm. This video is on a sensitive topic, so I wouldn’t be surprised if it has problems with reach and demonetisation; if you’d like to support content like this, you can do so through Patreon below.

Okay, so how can we approach this? One answer I think we can rule out pretty quickly is the libertarian one. Let everyone do what they want, users and platforms alike, and social media platforms and pages will thrive or fail as a result.

First, no libertarian society in history has worked. And second, even in the early days of the internet, where in effect the libertarian approach did thrive, social media companies slowly realised that if they let their platforms be full of self-harm, pornography, violence, and more, that advertisers and users tend to leave quickly. So they started to self-regulate, and some say, over-regulate.

As a result, free speech has become a subject of fierce debate. This is, I think, for three reasons: first, that free speech is, correctly, considered one of our most important values. Second, that with the internet, we now have more speech than ever before. And third, because in some cases, speech clearly can lead to harm.

We’ve seen this: suicide, mental health issues, self-harm, eating disorders, depression, promotion of terrorism and gang violence, the promotion of hate speech that openly calls for fascism or genocide.

So should we restrict speech of this type?

First, we should acknowledge that there is no such thing as free speech absolutism. We already limit the fringes of speech in many ways: threats, harassment, blackmail, libel, slander, copyright, incitement to violence, advertising standards, drug and food labelling standards, broadcasting standards.

Furthermore, we restrict many freedoms based on the likelihood of causing harm: health and safety and sanitation in restaurants, building codes, speeding and drink driving laws, wider infrastructure requirements, air travel regulation, laws on weapons, knives, etc… the list goes on.

So if we regulate these things, why do we not regulate social media companies when there is a significant risk of harm? I think we should be careful, focusing only on those very ‘edge cases’. If there’s a substantial risk, and regulation can effectively reduce it while minimising the curtailing of freedoms, then it should be done – by whichever institutions, companies, or governments can do it. Importantly, how this is done should be subject to democratic debate.

Policy analyst and author David Bromell writes: ‘Rules that restrict the right to freedom of expression should, therefore, be made by democratically elected legislators, through regulatory processes that enable citizens’ voices’.

He continues: ‘Given the global dominance of a relatively small number of big tech companies, it is especially important that decisions about deplatforming are made within regulatory frameworks determined by democratically elected legislators and not by private companies’.

And none of this is to deny that it’s difficult, and finding that line, striking the right balance, is complex. But in the cases I’ve mentioned, with the statistics as they are, to do nothing would be irresponsible. In all of them, I think the potential for harm is often as clear as the potential for harm from, for example, libel.

Does this mean posts, forums, and speech of this type should be banned outright? Not always. Human rights expert Frank La Rue has argued that we should make clear distinctions between: ‘(a) Expression that constitutes an offence under international law and can be prosecuted criminally; (b) expression that is not criminally punishable but may justify a restriction and a civil suit; and (c) expression that does not give rise to criminal or civil sanctions, but still raises concerns in terms of tolerance, civility and respect for others’.

In other words, context, proportionality, tiered responses, all matter. There are many policies that can be put in place before banning or prosecution – not amplifying certain topics, age-restrictions, removing posts or demonetising before banning.

Haidt argues that big tech should be compelled by law to allow academics access to their data. One example is the Platform Transparency and Accountability Act, proposed by the Stanford University researcher Nate Persily. We could raise the age above which children can use social media, or force stricter rules. We could ban phones in schools.

Finally, democratic means transparent, so we can all have a debate about where the line is. YouTube is terrible at this. I don’t mind if a video gets demonetised or age-restricted because I’ve broken a reasonable rule. But more often than not I haven’t, and they do it anyway.

The UK has just introduced an Online Safety Bill which addresses much of this, and I agree with the spirit of it. The Guardian reports that, ‘The bill imposes a duty of care on tech platforms such as Facebook, Instagram and Twitter to prevent illegal content – which will now include material encouraging self-harm – being exposed to users. The communications regulator Ofcom will have the power to levy fines of up to 10% of a company’s revenues’.

It makes encouragement to suicide illegal, prevents young people seeing porn, and provides stronger protections against bullying and harassment, encouraging self-harm, deep-fake pornography, etc.

However, it also tries to force social media companies to scan private messages, which is an abhorrent breach of privacy, and a reminder that giving politicians the power to decide can carry as much risk as letting a single tech billionaire decide.

But ultimately, through gritted teeth, I remind myself that the principle remains: the more democratically decided, the better. And democratically elected politicians are one step better than non-democratically elected big tech companies.

The post AntiSocial: How Social Media Harms appeared first on Then & Now.

]]>
https://www.thenandnow.co/2024/02/28/antisocial-how-social-media-harms-2/feed/ 0 1044
Why We’re So Self-Obsessed https://www.thenandnow.co/2024/01/28/why-were-so-self-obsessed/ https://www.thenandnow.co/2024/01/28/why-were-so-self-obsessed/#respond Sun, 28 Jan 2024 14:37:21 +0000 https://www.thenandnow.co/?p=1037 One of the best autobiographies ever written happens to be one of the first, one of the darkest, and one of the most creative. The title of Thomas de Quincy’s 1821 classic says it all: Confessions of an English Opium Eater. Yes, it’s about addiction, but he uses the subject to explore something ground-breaking for […]

The post Why We’re So Self-Obsessed appeared first on Then & Now.

]]>
One of the best autobiographies ever written happens to be one of the first, one of the darkest, and one of the most creative. The title of Thomas de Quincy’s 1821 classic says it all: Confessions of an English Opium Eater. Yes, it’s about addiction, but he uses the subject to explore something ground-breaking for the time – the inner universe.

We now live in an age of self-obsession. The era of everybody’s autobiography, as Gertrude Stein said. We all have our story, celebrities and politicians sell memoirs, we’re surrounded by reality TV and podcasts about personal growth, we live in a culture of self-development, the age of me.

De Quincy’s story begins with his pains and afflictions – toothache, poverty, hunger, sufferings – that ‘threatened to besiege the citadel of life and hope’, in his words. Crucially, he says that usually ‘Guilt and misery shrink, by a natural instinct, from public notice: they court privacy and solitude’.

His book is a confession because he says that usually people omit the ugly parts of their character and emphasise their success. He wanted to challenge that.

Then, he describes the relief of taking opium for the first time – ‘what an upheaving, from its lowest depths, of the inner spirit! what an apocalypse of the world within me!’

He describes an ‘abyss of divine enjoyment’ and ‘a panacea for all human woes: here was the secret of happiness, about which philosophers had disputed for so many ages, at once discovered: happiness might now be bought for a penny, and carried in the waistcoat pocket: portable ecstasies might be had corked up in a pint bottle: and peace of mind could be sent down in gallons by the mail coach’.

He uses the experience of his opium addiction to explore, psychologically and innovatively, that ‘abyss within’ – that ‘apocalypse of the world within me’.

This gazing into the abyss within is a mode of self-exploration that is still unfolding culturally. Compare the popular TV shows of this year with those of just twenty years ago. From The Last of Us to The Bear to Succession – these new shows are not about what they’re ostensibly about on the surface – zombies, cooking, or business – but about something much more universal: character. Much of their genius – much like Oppenheimer’s, for example – relies on shallow-depth-of-field close-ups on the intense emotions displayed on the torn character’s face.

So where did this inward gaze come from? Before de Quincy’s time, we have to remember that knowledge, traditionally, was not about what’s in here – endogenous – but what’s out there –exogenous. Everything from moral rules coming from god, to science coming from studying the world, to art coming from ancient models and classical forms – was about studying and learning from the external world.

There’s a great book on the early Ancient Greeks of Homer’s time that influentially makes the case that the people of that period saw their emotions, passions, angers, and desires not as coming from within but as being placed in them by the gods.

Agamemnon says that Zeus put wild ate – the goddess of mischief and delusion – in him, and made him act in a way contrary to how he usually would – he says the ‘plan of Zeus’ was fulfilled.

Other characters talk as if gods had taken away, changed, or destroyed a normal way of thinking and replaced it with another.

In other words, character came not from within but from without. Remember, the Ancients had no concept of personality or of biochemistry; they genuinely believed in their gods. Think about how powerful our imaginations are – why wouldn’t you believe that an intense experience of anger, say, and a loss of self-control was something planted there by the gods?

To take one more example, in the Christian framework, the self is always in reference to god. St Augustine may have written the first autobiography – The Confessions – in the 4th century, but his aim was to conform his behaviour to the external rules of Christian teaching and god’s will, not to discover some true self within.

The Medieval period was defined by roles you were born into – craftsman, butcher, peasant, lord – the rules were laid down, you didn’t question them. But in de Quincy’s time all of this was changing.

De Quincy was obsessed with two poets: William Wordsworth and Samuel Coleridge – he idolised them, wrote letters to them, befriended them, and travelled here to the English Lake District where they lived. He idolised them because, like other writers of the time like Goethe and philosophers like Jean-Jacques Rousseau – they were all working out a new idea of the self.

Wordsworth, before de Quincy, also wrote one of the first autobiographies – a very long poem called the Prelude about ‘the growth of a poet’s mind’ – all about his childhood experiences. He admitted that, ‘it is a thing unprecedented in literary history that a man should talk so much about himself’.

Wordsworth, adopting philosophical ideas from Germany, from thinkers like Kant, believed that our inner life – the framework, structure, ideas, and emotions of the mind and body – shapes the information we get through the senses: it patterns it, colours it, transforms and raises it, so that everyone sees a lake, for example, in a different way, with different memories, different goals and ideas. This was radically new.

It all started with Rousseau, who, not long before in the 18th century had a profound realisation. If, as he believed, it was the world, political systems, and social norms around him – those exogenous features – that were oppressing ordinary people, keeping them in chains, where was truth to be found? It could only be found, he decided, within.

Rousseau opened his autobiography – again, one of the first, probably the first, and again titled The Confessions – with this influential passage: ‘I am made unlike any one I have ever met; I will even venture to say that I am like no one in the whole world. I may be no better, but at least I am different’. Because of this he said, ‘I should like in some way to make my soul transparent to the reader’s eye’.

The historian W.J.T. Mitchell even described Rousseau as the first modern man – ‘the great originator’.

Goethe, inspired by Rousseau, said that he turned into himself and found a world. Rousseau said his project was one that had ‘no model’, and would have ‘no imitator’.

It was an act of pure, singular, individual, irrepressible creativity – from within.

This spirit of the age had a profound effect on many writers. Wordsworth wrote so much about this place because it was where he was from. He described the psychological ‘spots of time’ that influenced his individual character.

Personal and local stories from places like this one, Dungeon Ghyll Force – where lambs almost drown because the shepherd’s boys aren’t concentrating – allow him to describe their inner worlds: their ‘pulses stop’, their ‘breath is lost’.

Or when he remembers stealing a boat at night when he was a child, becoming terrified in the middle of a lake at the dark imposing shapes of the mountains around him that haunted him and ‘moved slowly through my mind / By day, and were the trouble of my dreams’.

De Quincy admired all of this so much that he moved into Wordsworth’s cottage after him and when writing his own autobiography, wrote with his tongue in his cheek that there were ‘no precedents that he was aware of’ for this sort of writing.

But this is what makes De Quincy’s Confessions so innovative. He focuses on what is usually swept away in our own self-aggrandising narratives about ourselves. He says, ‘Nothing, indeed, is more revolting to English feelings, than the spectacle of a human being obtruding on our notice his moral ulcers or scars, and tearing away that “decent drapery”’.

He took Wordsworth’s exploration of emotion, feeling, the self and the natural world and applied it to his own warped experiences with opium and urban life.

He describes how opium furnished ‘tremendous scenery’ in the dreams of the eater. He writes poetically about the ‘endless self-multiplication’ of the self going up and down symbolic staircases, and talks about the ‘wonderous depth’ within.

He uses metaphors like translucent lakes and shining mirrors and waters and oceans changing, surging, wrathful, to describe the changes in our own inner lives. ‘My agitation was infinite,’ he said ‘my mind tossed – and surged with the ocean.’

Of course, this new inner self didn’t appear from nowhere. It mirrored the scientific developments of the period. When astronomers like Galileo made observations about the universe that contradicted the taught wisdom of the Bible, all of these poets and philosophers were saying: ‘what about the universe within?’

Wordsworth’s spots of autobiographical time, de Quincy’s artistic description of personal challenges and addiction, Lord Byron’s model of heroic outcasts on voyages of self-discovery – all provide the groundwork for modern psychology, the modern self, and for the core injunction of the modern world: create yourself as something new.

The autobiographical self is the model that helps us traverse the world. As psychologist Qi Wang says, the autobiographical self is, ‘self-knowledge that builds upon our memories and orients us toward the future, allowing our existence to transcend the here-and-now moment’.

This had an incalculable effect on the culture and politics of the modern period. These writers were a sensation across Europe and anyone who argued for individual rights, freedoms, the power of ordinary people, drew on them in some way.

On the other hand, it’s produced the narcissism and obsession with the self we see everywhere today. We no longer look outward as much, but spend a lot of time navel-gazing inward.

I think the challenge of this century will be whether the world within can be reconciled with the world without.

The post Why We’re So Self-Obsessed appeared first on Then & Now.

]]>
https://www.thenandnow.co/2024/01/28/why-were-so-self-obsessed/feed/ 0 1037
CONSPIRACY: The Fall of Russell Brand https://www.thenandnow.co/2023/12/05/conspiracy-the-fall-of-russell-brand/ https://www.thenandnow.co/2023/12/05/conspiracy-the-fall-of-russell-brand/#respond Tue, 05 Dec 2023 12:31:19 +0000 https://www.thenandnow.co/?p=949 On the surface, this is a story about Russell Brand, but it’s also a bigger story – about institutions, trust, truth, uncertainty and fear, coverups and questioning, about how we all think. It delves into the most fundamental of human questions – what are the stories we tell ourselves? Who gets to tell those stories? […]

The post CONSPIRACY: The Fall of Russell Brand appeared first on Then & Now.

]]>
On the surface, this is a story about Russell Brand, but it’s also a bigger story – about institutions, trust, truth, uncertainty and fear, coverups and questioning, about how we all think. It delves into the most fundamental of human questions – what are the stories we tell ourselves? Who gets to tell those stories? What is the truth?

Russell Brand’s career as an entertainer was based on promiscuity, shock, extravagance, wit, intelligence – but that’s the case with so many comedians. In 2013 Brand did something most comedians don’t – he talked to Britain’s chief MSM political interrogator – Jeremy Paxman.

He said: ‘here’s the thing that we shouldn’t do: shouldn’t destroy the planet; shouldn’t create massive economic disparity; shouldn’t ignore the needs of the people. The burden of proof is on the people with the power’.

Brand told the incredulous Paxman that he didn’t vote, what’s the point? It went viral at the time, not least because Brand is one of Britain’s most recognisable faces, but because it seemed to many people to capture the mood: an ordinary person, telling the truth, up against the establishment.

It was the start of a shift towards politics.

In 2014, after a stint in Hollywood, he wrote a book – Revolution – which he discussed in another much talked about interview on Newsnight in the UK.

The same year, he started The Trews on Youtube, reading and commenting on the UK newspapers, interviewing a range of people, making mostly progressive arguments.

Of course, for many the pandemic changed things. In January of 2021, one subject stands out as getting millions more views than usual – The Great Reset.

It’s a topic Brand comes back to several times, and these videos always have many more views than most others. Around this time, Brand becomes more critical of policies surrounding Covid-19 – much of it reasonable. He shifts to his current format – Stay Free – a regular, full-time show with millions of subscribers, advertisers, co-presenters, and guests.

A few months later, in the middle of 2021, stories began to be published, mostly drawing on Brand’s tweets: Brand ‘is a conspiracy theorist’.

By 2022, two competing narratives are set. For many, Brand had become a crackpot, tinfoil-hat conspiracist. For Brand, there’s a centralising, authoritarian, mainstream agenda – dominated by MSM, the political establishment, big tech, and global corporate interests – to take away our freedoms.

I want to look at several stories as they unfolded – Covid and vaccines, the Great Reset, the Dutch farmers protest, and the allegations against Brand in September of 2023 – and ask a question that I think is fundamental to our information age: what does it mean to be called a conspiracy theorist? Especially if there have been real conspiracies in history – Iran-Contra, Watergate, the Pentagon Papers, the Holocaust, all the way back to Julius Caesar’s assassination. All of these were the result of a conspiracy.

We’ll look at the history and the psychology too to try and separate fact from fiction, asking what drives Brand? Is there any truth to what he says? How can we think about the establishment, the mainstream media, the global elite – what does all of this tell us about the society we live in?

 


The Great Reset

Brand talks about lots of different subjects, in a lot of different ways, but there are a few themes and topics he comes back to again and again.

He is aware he’s been framed as a conspiracy theorist, and frequently points out that he’s just reading facts from a variety of sources, some mainstream – the Guardian, the New York Times, the Washington Post – some more fringe. So how can we disentangle fact from fantasy?

At the beginning of 2021, clips circulated on the World Economic Forum’s Great Reset.

‘The Great Reset’ is an initiative from the World Economic Forum to drastically change the direction of the economy after Covid-19 by addressing social issues and, ‘to reflect, reimagine, and reset our world’.

Depending on who you ask, The Great Reset is anything from capitalist propaganda, to a genuine attempt to address the problems with capitalism, to a global conspiracy to exert more control over the population.

In the case of Brand, he argued, ‘there are some people that believe in shady global cabals running things from behind the scenes. Now, I don’t believe that, I believe that there are plain visible economic interests that dominate the direction of international policy’.

The video is reasonable. It criticises those who think the Great Reset is part of an authoritarian plan to take control, justified by a manufactured, fake climate crisis, for example.

The video is a hit, with a million views, compared to his other videos of the time, which hover around 100,000.

Brand makes another video, saying he’s decided to, ‘dive a little bit deeper into what you think, and further evidence, and your legitimate concerns’. The video currently has 2.7 million views.

A year later, he says: ‘bad news, the Great Reset, where you will own nothing and be happy, is being brought about by economic policy decisions made by your government that will facilitate the advance of the most powerful interests on earth’.

Brand continues that: ‘this is not conspiracy theory, I’m going to read you the actual facts here, I’m just using rhetoric that’s appealing, I’m an entertainer’.

Okay, so what is the Great Reset? It began as a book written by the founder of the corporate lobbying group the World Economic Forum, Klaus Schwab, and his co-author, Thierry Malleret.

One review describes three main themes:

  1. A ‘push for fairer outcomes in the global market and to address the massive inequalities produced by global capitalism’.
  2. ‘efforts to address equality and sustainability by urging governments and businesses to take things like racism and climate change more seriously’.
  3. Embrace ‘innovation and technological solutions [that] could be used to address global social problems’.

All of this sounds reasonable enough, admirable even. But, as political author Ivan Wecke points out, Schwab and the WEF’s ideas have something ‘fishy’ about them. The initiative can be seen as an exercise in corporate PR that gives multinational business leaders more power, not less, and that gives political elites more power, not less. In another review of the book, Steven Umbrello concludes that the book does point out a lot of problems but offers no substantive solutions. And, of course, liberal elites love this stuff. Trudeau, for example, has used the language of needing a ‘reset’.

So there’s plenty to criticise. But as Brand explores the Great Reset, he connects it to other events – BlackRock buying up houses, for example. He emphasises one video ominously claiming that in the future you’ll own nothing and be happy, and links it to increasing government restrictions during the pandemic, Bill Gates, and the Dutch farmers protest.

He seems more aligned with Alberta premier Jason Kenney, who has claimed the Great Reset is a ‘grab bag of left-wing ideas for less freedom and more government’, and ‘failed socialist policy ideas’.

Brand uses the word ‘agenda’ frequently, and as he says, it’s not a conspiracy, he’s just reading the facts. So what is a conspiracy?

One definition is: ‘the belief that a number of actors join together in secret agreement, in order to achieve a hidden goal which is perceived to be unlawful or malevolent’ (Zonis and Joseph).

Another, by professor of psychology Jan-Willem Prooijen, argues a conspiracy has five components:

  1. It makes connections that explain disconnected actions, objects, or people into patterns
  2. It argues that it was an intentional plan
  3. It involves a coalition or group
  4. The goal is hostile, selfish, evil, or at odds with public interest
  5. It operates in secret

Another definition proposes a simple four criteria model: ‘(1) a group, (2) acting in secret, (3) to alter institutions, usurp power, hide truth, or gain utility, (4) at the expense of the common good’.

There are also many types of conspiracy – within governments and institutions; without, in the form of another country or nefarious power; above, in the form of shady elites; or even below, in the form of ordinary people overthrowing capitalism.

By Prooijen’s criteria, the Great Reset can be thought of as a conspiracy. After all, it’s intentional, it involves a group of people, some argue it’s not in the public interest, and it at least in part operates in secrecy at Davos. But Brand points out that it’s not a conspiracy, they’re saying it publicly: https://youtu.be/BXTPzFSx6oI?si=xMQIZ7u4xFfNYEc7&t=213

But I think the most interesting component is the first one – it makes connections that explain disconnected actions, objects, or people into patterns. Brand does this often, hopping between topics. So let’s look at another topic, the Dutch farmers protest.

 

The Dutch Farmers Protest

The Dutch farmers’ movement, beginning in 2019 and continuing today, is a series of protests arguing that farmers are being unfairly targeted in efforts to address climate change.

The Dutch government have passed a range of policies aiming to cut nitrogen pollution and livestock farming in the country.

Brand says, ‘Bloody farmers, protesting, hating the environment. What is it? Are farmers all bastards? Or, are we seeing the beginning of the Great Reset play out in real time?’.

In short, for Brand, the Dutch government’s policies are a power grab, taking power from ordinary farmers, and he connects the protests to other stories he covers frequently – the Great Reset, the WEF, Bill Gates, and the MSM failing to cover the events appropriately.

Remember: ‘1: It makes connections that explain disconnected actions, objects, or people into patterns’.

So what’s really happening in the Netherlands?

Studies since the 80s have shown that nitrate pollution – in the ground, in drinking water, and in wider ecosystems – has been an increasing problem. Nitrate pollution can cause blue baby syndrome, increases in bowel cancer, respiratory problems, and premature birth.

It causes havoc in rivers, which nitrate-based fertiliser runs into, killing fish. The EU has identified Natura 2000 areas – fragile areas of nature that are home to rare and threatened species.

The Netherlands is an agricultural superpower: the second largest exporter of agricultural products in the world, and the EU’s number one exporter of meat. Unfortunately, with so much intensive farming close to the designated Natura 2000 areas, nitrate pollution in the Netherlands is a big problem.

It’s also an EU member state – with its freedom of movement, courts, European Parliament, and so on – and Dutch support for staying in the EU is very high, at around 75%.

The EU has legislated to reduce nitrate pollution by 2030. More broadly, worldwide, agriculture contributes between a quarter and a third of all greenhouse gas emissions.

The Dutch government and the EU have agreed a 1.5 billion euro package to help 2,000–3,000 “peak polluter” farmers innovate, relocate, or change business – or, as a last resort, to buy them out.

Obviously, among many farmers this is deeply unpopular.

‘For agricultural entrepreneurs, there will be a stopping scheme that will be as attractive as possible’, said Van der Wal in a series of parliamentary briefings. ‘For industrial peak polluters, we will get to work with a tailor-made approach and in tightening permits. After a year, we will see if this has achieved enough’.

Is it hypocritical to focus efforts on ordinary farmers rather than industrial peak polluters? On the surface, yes. And none of what I’ve just said is to blame farmers. But it’s obviously a complex problem with a lot of different interests at stake.

And in the middle of the video, Brand makes some reasonable points. In Sri Lanka, the outright banning of all fertilisers and pesticides was disastrous. He makes points about the focus falling on ordinary farmers rather than on corporations and the one percent; he says it’s always ordinary people rather than the powerful. All of which I can agree with. But he ignores some of the complexity. The Dutch government has also ordered coal power plants to close, for example. And the biggest polluter in the country is Tata Steel, which the regulation does focus on, and which is one of the country’s biggest employers of ‘ordinary people’.

But what stands out is the framing. It’s about the Great Reset, Bill Gates, the agenda, and the next piece of the puzzle…

 

COVID-19

There are several ongoing Covid-19 debates. The lab leak hypothesis, the efficacy of vaccines, big tech censorship, the legality or ethics of ‘lockdowns’ – and what should be clear, wherever you stand on a particular issue, is that each of these, while having some crossover, is somewhat different.

Some of Brand’s many Covid-19 videos, like one on ‘vaccine passports’ for example, have a lot to agree with. However, as with other topics, Brand has a tendency to take a story and spin it into a wider pattern.

We hear it a lot recently – it’s about ‘the narrative’.

The lab leak hypothesis isn’t about laboratory safety precautions or lack thereof, but about a coverup involving world government, the WHO, and big tech censorship. A WHO epidemic surveillance network across the world that monitors the outbreak of communicable disease becomes about an elitist surveillance society that spies on us. A doctor describing helping with outbreaks becomes an object of derision.

Take this video, one of many on vaccines. It’s about Pfizer falsifying the data of vaccine trials – a serious issue. It’s based on a BMJ article in which a whistleblower raised a number of concerns with a trial site they worked at, including:

  1. ‘Participants placed in a hallway after injection and not being monitored by clinical staff
  2. Lack of timely follow-up of patients who experienced adverse events
  3. Protocol deviations not being reported
  4. Vaccines not being stored at proper temperatures
  5. Mislabelled laboratory specimens, and
  6. Targeting Ventavia staff for reporting these types of problems’

All worrying concerns. And Brand repeatedly points out that he is just looking at the evidence objectively, just asking questions. He describes himself as a ‘glass funnel’, reporting information carefully and unbiasedly, while the MSM report it ‘morally’, telling people what to do.

There are a few points of irony here. First, Brand obviously has a moral position here. We all do – unless we read a story without comment or opinion, which Brand is not doing. Second, he says it’s not being reported on by the mainstream media, while using reports from mainstream institutions – the BMJ, CBS – and it has been reported by the Daily Mail and the Conversation. I find a brief reference to it in the Financial Times.

But, it might be reasonable to ask, should there not be more of an outcry? I can’t find it reported in the New York Times or the BBC, for example.

As the Conversation article points out, the concerns raised are important and worrying but don’t meaningfully undermine the wider evidence on Covid-19 vaccines. The story involved three Pfizer trial centres out of 150. Those three sites involved around 1,000 people.

Of course, across the world, hundreds of thousands took part in trials involving many different pharmaceutical companies, third-party trial centres, universities, and hundreds of regulatory bodies.

And most of the whistleblower’s complaints were about sloppiness – photos of things like needles thrown away inappropriately, participants’ IDs left out when they shouldn’t have been. One section reads, ‘a Ventavia executive identified three site staff members with whom to “Go over e-diary issue/falsifying data, etc.” One of them was “verbally counseled for changing data and not noting late entry,” a note indicates.’

Now, all of this is obviously worrying, good reporting, worth investigating, et cetera.

But it’s important to keep a sense of proportion. This is a single third-party trial centre in Texas, but Brand spins it into a wider narrative, claiming in another video, for example, that, ‘the mainstream media are preventing their own medical experts from accurately reporting on potential covid problems. Meanwhile, they continue to repress information about vaccine efficacy’.

As Prof Douglas Drevets, head of the infectious diseases department at the University of Oklahoma, has written: ‘There have been so many other studies of the Pfizer COVID-19 vaccine since the Phase III trial that people can be confident in its efficacy and safety profile. That said, Pfizer might be wise to re-run their analysis excluding all Ventavia subjects and show if that does/does not change the results. Such an analysis would give added confidence in the Phase III results’.

Pfizer then reported that they looked into the complaints and said that, ‘Pfizer’s investigation did not identify any issues or concerns that would invalidate the data or jeopardize the integrity of the study’.

I’m not saying Pfizer’s claims should be taken at face value, or that pharmaceutical companies don’t have perverse profit incentives, or that this isn’t worth someone digging into. The point is that this is a very small story, it has been looked into, and I’d imagine that if you’re an editor at a TV station or newspaper, with hundreds of other competing stories to present, you’d decide on balance that there are more important ones. News reporting is a matter of emphasis. With only a limited number of positions on, say, a front page each day, what’s included and what’s not?

Brand says that the mainstream media are censoring information when in fact the opposite is true. There are, again, issues with the mainstream media that we’ll come to, but it’s an endorsement of the press that, unlike in say China or Russia, a relatively minor issue could be reported and investigated.

Brand constantly says things like, ‘this is what happens when you politicise information’, without the awareness that by weaving insignificant details into wider narratives, deciding to give small stories weight, he is himself obviously politicising information.

The whistleblower was also reported to be sceptical about vaccine efficacy more broadly. Brand also relies on jokes as innuendo to spin the story into his wider conspiracy narrative – joking, for example, that the whistleblower was found dead.

He says, ‘individual freedom, individual ability to make choices for yourself, based on a wide variety of sometimes opposing evidence, and sometimes contradictory information, that places you in the position as an adult to make decisions for yourself. That’s not what the mainstream media want, but that’s what we demand on your behalf’.

But he doesn’t use a wide variety of evidence. He selects minor stories and connects them to the narrative. There are many, many, many more sources that report things like vaccines have saved three million lives in the US alone. 96% of doctors are fully vaccinated. Myocarditis has been reported in ten out of a million shots of the vaccine, but is more likely to be caused by the Covid-19 virus than the vaccine.

There are debates to be had, there always are, but what Brand doesn’t have is a good sense of the weight and significance of a story. And what he does have, as we’ll get to, is a good sense of how to tell a compelling, scary and entertaining story.

But wait, just because it’s a small story it doesn’t make it automatically wrong. And yes, there are monied interests, powerful lobbies, values and ideas that are dominant and others that get sidelined. The risk is throwing out the baby with the bath water. And as we saw at the beginning of the video, some conspiracies turn out to be true, and they weren’t reported on either. So is there any other way to separate fact from fiction?

 

History and Conspiracy

History is full of conspiracies, but they tend to be limited – a small group of people with a limited set of goals.

Most theories, though, have turned out to be wrong, or at the very least, there’s little evidence for them. Vaccines don’t cause autism. Obama was born in the US. The earth is not flat. Witches weren’t conspiring to make the harvests fail. Jews weren’t conspiring to take over the world in Weimar Germany.

But the idea that there is an agenda to take over the world – an idea that connects dots between disparate events – is as old as time. And such ideas have usually turned out to be wrong, or at least, as we’ll get to, to miss the real point.

In the middle of the 19th century, it was a common belief in America that the Catholic Church and the monarchies of Europe were not only uniting to destroy the US, but had already infiltrated the US government. One Texas newspaper declared that, ‘It is a notorious fact that the Monarchs of Europe and the Pope of Rome are at this very moment plotting our destruction and threatening the extinction of our political, civil, and religious institutions. We have the best reasons for believing that corruption has found its way into our Executive Chamber, and that our Executive head is tainted with the infectious venom of Catholicism’.

Before that, it was the Illuminati, who, according to one book in 1797, were formed, ‘for the express purpose of ROOTING OUT ALL THE RELIGIOUS ESTABLISHMENTS, AND OVERTURNING ALL THE EXISTING GOVERNMENTS OF EUROPE’.

In an influential 1964 article, The Paranoid Style in American Politics, Richard Hofstadter points out that throughout history there have been suspicions of plots that have infected all major institutions, a fifth column, that all in power have been compromised.

The inventor of the telegraph, Samuel Morse, wrote that, ‘A conspiracy exists, its plans are already in operation… we are attacked in a vulnerable quarter which cannot be defended by our ships, our forts, or our armies’.

Morse, sounding just like Brand, wrote: ‘The serpent has already commenced his coil about our limbs, and the lethargy of his poison is creeping over us.… Is not the enemy already organized in the land? Can we not perceive all around us the evidence of his presence?… We must awake, or we are lost’.

Another article worried that ‘Jesuits are prowling about all parts of the United States in every possible disguise, expressly to ascertain the advantageous situations and modes to disseminate Popery’.

It was alleged that the 1893 depression was the result of a conspiracy by Catholics to attack the US economy by starting a run on the banks.

WWI was started because the Austro-Hungarian Empire believed the killing of the heir to the throne, Archduke Franz Ferdinand, was the result of a Serbian government conspiracy, and so attacked Serbia, setting off the chain of events leading to the war. There was no evidence for this. Historian Michael Shermer calls it the deadliest conspiracy theory in history.

Senator McCarthy famously believed a communist conspiracy had infiltrated every American institution. In 1951 he said that there was, ‘a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man. A conspiracy of infamy so black that, when it is finally exposed, its principals shall be forever deserving of the maledictions of all honest men’.

During the resulting Red Scare, influential businessman Robert Welch wrote that, ‘Communist influences are now in almost complete control of our Federal Government’, the Supreme Court, and that they were in a struggle for control of, ‘the press, the pulpit, the radio and television media, the labor unions, the schools, the courts, and the legislative halls of America’.

 

The Psychology of Conspiracy

One of the important distinctions here is between phrases like an ‘agenda’ and ‘conspiracy theory’. Brand, while defending himself as not being a conspiracy theorist, tends to use terms like ‘agenda’, ‘they’, and ‘the global elite’. The difference is between purposeful collusion across institutions and a pattern of say, certain values aligning between corporations and neoliberal politicians. Sometimes this is a gradient more than black and white, but another way we can untangle this is to look at studies about who believes in conspiracies and for what reasons.

Firstly, a lot of people believe in them. A third of Americans believe Obama is not American; a third that 9/11 was an inside job; a quarter that Covid was a hoax; 30 percent that chemtrails are somewhat true. And 33% believe that the government is covering something up about the North Dakota crash.

Never heard of it? That’s because researchers made it up. They polled people about their beliefs in conspiracies and included a completely made up event in North Dakota, and people instinctively believed that the government was hiding something about it.

People are naturally suspicious of power, which is of course a good thing, but for some people that leads to belief in a conspiracy. Why?

There are several factors that psychologists have looked at. The first is uncertainty. Psychologist Jan-Willem Prooijen points out that at a fundamental level, conspiracy theories are a response to uncertainty.

He writes: ‘Conspiracy theories originate through the same cognitive processes that produce other types of belief (e.g., new age, spirituality), they reflect a desire to protect one’s own group against a potentially hostile outgroup, and they are often grounded in strong ideologies. Conspiracy theories are a natural defensive reaction to feelings of uncertainty and fear’.

Responding to uncertainty and fear by hypothesising a threat is an evolutionary instinct. You’re better off jumping at the sight of a stick in the long grass than being bitten by a snake. The same thing happens when we see shapes in the darkness. We are risk-calculating creatures, always on the watch for danger.

And we do this by looking for patterns. Jonathan Kay writes that, ‘Conspiracism is a stubborn creed because humans are pattern-seeking animals. Show us a sky full of stars, and we’ll arrange them into animals and giant spoons. Show us a world full of random misery, and we’ll use the same trick to connect the dots into secret conspiracies’.

Psychologists call it pattern perception. I like to call it patternification.

Prooijen writes, ‘pattern perception is the tendency of the human mind to “connect dots” and perceive meaningful and causal relationships between people, objects, animals, and events. Perceiving patterns is the opposite of perceiving randomness’.

Again, all very reasonable. But sometimes the stick in the grass is just a stick. And sometimes an event is just random, meaningless, an accident, a result of incompetence, ignorance, and so on.

Prooijen writes, ‘Sometimes events truly are random, but most people perceive patterns anyway. This is referred to as illusory pattern perception: People sometimes see meaningful relationships that just do not exist’.

We all do it all the time. But what’s interesting in research is that some people see patterns more readily than others.

In studies, people who see patterns in abstract paintings, random dots, or coin tosses were more likely to believe in conspiracy theories, to believe in paranormal phenomena, and to be religious. People who believe in astrology, spiritual healing, telepathy, or communication with the dead are all more likely to believe in conspiracy theories. Belief in conspiracies has also been shown to increase after natural disasters.

Threat leads to the formation of a belief in a pattern in response to that threat.

In many – by no means all, but many – of Brand’s videos, small stories, a small sample of data, a single piece of evidence, are spun into a wider pattern.

In this video, he links the Great Reset and the WEF’s video, ‘you’ll own nothing and be happy’, to movements in the financial markets – a story, for example, about BlackRock buying up real estate.

It’s all part of the agenda. He throws in that the mainstream media reports it as ‘good news’ – a housing bonanza that’s going to be great for everyone – insinuating journalists are part of the agenda, ignoring the irony that he’s citing the New York Times.

What’s it got to do with the Great Reset? I honestly couldn’t tell you. I wonder if bitcoin.com – Brand’s source – has an agenda?! In this video, the Great Reset is linked to the farmers protests. Throw in Bill Gates and vaccines, and it all becomes part of a simple good-vs-evil narrative.

Author Naomi Klein describes it as a ‘conspiracy smoothie’.

She writes, ‘the Great Reset has managed to mash up every freakout happening on the internet — left and right, true-ish, and off-the-wall — into one inchoate meta-scream about the unbearable nature of pandemic life under voracious capitalism’.

Conspiracy theories become, through patternification, totalisers. Everything gets lumped in together as part of the same single narrative. It becomes zero sum, good vs evil analysis. But this doesn’t answer why some people do this and others don’t, nor does it answer when the dots should be connected. To see why people do this, we’ll look at two categories: cognitive biases and the need for control.

 

Cognitive Biases

Studies have shown that education beyond high school halves the tendency to believe in conspiracies, from 42% to 22%. Why is this?

It’s kind of counterintuitive, because in many ways education actually teaches you, more than anything, to be sceptical. The scientific method, for example, is built on scepticism of received wisdom. In history, you’re taught to be sceptical of and scrutinise the literature and sources. In politics, many approaches – liberalism, Marxism, poststructuralism, and more – are, at their core, sceptical about the state and institutional power. If you’re sceptical about what you’re told, surely you’re more likely to believe that something is going on behind the scenes.

Except, while scepticism is key, education also teaches you to draw on evidence, being led by evidence as much as possible – and importantly, all of the evidence.

If you only draw on bitcoin.com to make a case, you won’t get far. This is why most undergraduate essays, dissertations, and papers submitted to journals require a literature review – show that you’ve assessed and understood the literature, identified weaknesses, and made an argument.

In fact, the very basis of the modern scientific method in both the hard sciences and the social sciences and humanities is peer review – you must reference, show you understand the evidence, cite sources in a bibliography, show how the studies can be rerun and submit it to a body of peers to check over the work. This idea – that work is checked and can be responded to – runs through the heart of institutions.

We rely on the work of others, we build upon it, we respond to it. It has its limits, it’s often biased, it’s middle class, it can be wrong, subdisciplines are at loggerheads, criticise one another, but, that’s precisely what makes it work – it’s tentative, it’s open to critique, and it can be checked, it’s how knowledge is built up communally. We’ll come back to its benefits and limits.

Another mistake conspiracy theorists make is proportionality bias: that a large effect needs a large cause to create a sense of ‘cognitive harmony’ – a balance between two ideas.

JFK couldn’t have been killed by a lone assassin, he was the president of the US. Princess Diana couldn’t have been randomly killed in a car crash, it must have been the royals. 9/11 couldn’t have been the result of 19 guys from the Middle East, it must have been the government.

We’re all human, including presidents. But if a US president and your neighbour Ned both died randomly on the same day – which one would there be a conspiracy about?

In one study, two groups were told two different stories about a president of a small country being assassinated. One group was told the assassination led to civil war; the other, that it didn’t. People were more likely to believe the assassination was a conspiracy if it led to a war.

Prooijen says the proportionality bias is that ‘a big consequence must have had a big cause.’

There are some other biases. Tribalism leads us to protect our in-group, dividing the world into us vs them, good vs evil. Another is the intentionality bias, which leads us to believe that the negative effects of others’ actions were intentional, whereas if we did them it would be an accident or we’d have good reason. Every banker is evil; our own pension fund is necessary. Or, in the form of Hanlon’s razor: ‘never attribute to malice that which is adequately explained by stupidity’. A politician does something that we perceive to be evil, when really they just don’t understand the topic, and so on.

So these are biases of thinking that we all make, and I think in many ways they can be summed up in the way Brand thinks about the mainstream media.

 

The Mainstream Media Agenda

I think combining these fallacies and thinking about the way Brand takes a small story – like the Pfizer data falsification story – and turns it into a global elite agenda, gives us a good frame to think about Brand’s critique of the mainstream media.

It’s almost always part of the narrative, and even more so since the accusations against him in September.

The mainstream media have lots of problems – they’re diverse problems – not least of which that they tend to be close to elites, institutionalised, cozy with politicians, centred in and overly focused on places like Washington, London, and New York, and have financial interests. The list goes on.

But to paint hundreds of thousands of journalists in the US and UK alone as part of an agenda is not only naïve, it’s dangerous.

Firstly, large media institutions could not get away with relying on small stories to construct speculative narratives like Brand does. They are always going to be led, for good or bad, by the dominant body of evidence available. If 99% of scientists believe that the vaccine is safe and effective, the BBC is going to report it that way. That’s what you get. Media literacy means reading the news widely, knowing an institution’s biases, and reading elsewhere too.

Second, the surge in independent media is a great thing – you’re watching it, now – and obviously I’m an enthusiast. However ‘independent’ does not automatically mean authentic, unbiased, ‘giving the voiceless a voice’, ‘free’, or any other of the superlatives you often hear. Independent media largely rely on stories investigated and first reported by the same mainstream media they go on to criticise. Brand does this all the time. ‘Independent’ media rarely have the budget to execute years-long investigations, report from warzones, get access to archives and data quickly, get to the scene of a disaster or protest while it’s happening. Media institutions are important for this very reason. We need institutions with the budget and connections to do these things. Compare this to Brand reading from bitcoin.com.

Third, to paint everyone in the mainstream media in the same way is to ignore that the media is made up of millions of people around the world doing work passionately, carefully, with varied opinions and interests. To frame the mainstream media as monolithic, and use language like us vs them, is dangerous.

Brand paints anyone who is part of the ‘narrative’ as stooges for a centralising corporate and government agenda to take away your freedom. As any cursory look at a textbook on propaganda will show, that’s not how influence works in authoritarian countries, let alone liberal ones.

Brand relies on a top down model of propaganda in which power and money directs information, education, news, and opinion downwards through the press and the schools into the minds of a mindless population.

But as Jonathan Auerbach and Russ Castronovo point out in their introduction to the Oxford Handbook of Propaganda Studies, propaganda is not total, even in totalitarian regimes. Persuasion by information is much more complex. They write, ‘people consume propaganda, but they also produce and package their own information just as they also create and spin their own truths.’

If you think the mainstream media are just propagandists then I implore you to just look at the facts of any of these issues. One million people have died from Covid in the US alone. And vaccine hesitancy has been estimated to have led to 300,000 preventable deaths. That’s a study from Brown, Harvard, the New York Times, and more. If you think the mainstream media are just propagandists, take a look through the Pulitzer Prize nominees at the investigations of the past year.

Again, I’m not saying there aren’t many, many criticisms to be made. Obviously the mainstream media are de facto in the centre; collective, radical, and socialist solutions or candidates will never get a fair shout; lobbying and money will always delegitimise solutions that don’t align with their interests; and supporting independent progressive media is crucial to countering that. But none of these criticisms paints the mainstream media as monolithic, evil propaganda. That framing is simplistic, it’s dangerous, it’s wrong, and, as we’ll see, it’s often about narcissism, control, and in many cases outright lies.

 

The Recent Allegations and Rumble

In September of 2023, Channel Four and the Times in the UK released an investigation into Brand that included allegations of sexual assault and rape. The day before, Brand posted a video denying the allegations.

What happened next, for many, seemed to prove Brand’s point. The media focused its attention on Brand, countless articles were written, news items broadcast, investigations launched at the BBC. He was dropped by his agent, a tour was cancelled, Youtube removed advertising from his account so he could no longer make money from it, a charity he did work for cut ties, and on and on.

One platform stood firm – Rumble – and a letter from a UK MP asking whether Rumble was going to stop Brand earning money was ridiculed and criticised by many, including Rumble, who said in an open letter: ‘We regard it as deeply inappropriate and dangerous that the UK Parliament would attempt to control who is allowed to speak on our platform or to earn a living from doing so’.

Inevitably it became a story about a story. Free speech, cancel culture, the establishment, the agenda.

There are, again, reasonable debates to be had here. I for one am not sure Youtube should have taken a stance based on allegations alone, no matter how strong. But a week or so after the allegations, the i newspaper in the UK ran a story about ads on Brand’s Rumble channel. One was from the Wedding Shop, who told them: ‘We are on the phone right now to our agency to ascertain which of these networks is showing our ads on Rumble so that we can actively remove ads from the platform… It goes without saying that we would not be happy to be featured on Russell Brand’s videos’.

They continued: ‘We use a media agency to spend our advertising budget and we have never chosen to advertise on Rumble, which must be part of the Google, Bing or Meta ad network. Where our ads are placed is not something we generally control – it would be for Google, Bing or Meta to decide whether or not to include or exclude particular platforms’.

The paper also reports that several companies, including Burger King, Xero and Fiverr, have stopped their ads running on Rumble. The stories are all similar.

A Fiverr spokesperson said, ‘These ads have been removed and our partners and teams have been alerted to ensure this doesn’t happen again. (We have excluded his channel on both YouTube and on Rumble.) We take brand safety and ethical advertising placement seriously, and we do not condone or support any form of violence or misconduct’.

A toy manufacturer said something similar.

In 2017, Youtube went through something called the ‘adpocalypse’. Advertisers pulled out of Youtube en masse when they realised that their ads were being played in front of videos that were accused of being anti-Semitic, homophobic, or just ‘scammy’.

All of this points to an obvious conclusion. Charities, agencies, advertisers, and institutions would prefer not to be linked with someone accused of sexual assault and rape – it’s not great PR. Youtube, in particular, has to balance between supporting creators and attracting advertisers, and so the middle ground is to limit ads on videos that advertisers are likely to pull out of, before the advertisers pull out of Youtube.

Of course, for Brand, this quickly became part of the agenda. In a Rumble video he criticises something called the Trusted News Initiative and argues that the mainstream media are targeting independent media in an attempt to control the narrative.

The Trusted News Initiative is an effort by many media organisations to counter fake news, false reports, viral disinformation, and so on. Not dissenting opinions, but purposefully false information, which studies have shown get shared six times as much as real news on sites like Facebook. Fake stories like this one: ‘Ilhan Omar Holding Secret Fundraisers with Islamic Groups Tied to Terror’, which got shared 14k times on Facebook alone.

Brand argues that, ‘plainly the TNI has an agenda, an explicit agenda to throttle and choke independent media’.

He uses a story from Reclaim the Net that focuses on a lawsuit filed in the US by Robert F. Kennedy Jr., which claims that dissenting views are being stamped out unconstitutionally by the TNI, violating freedom of speech and anti-trust laws.

It’s a minor story from nine months ago, but it’s useful for Brand because it supports his main point: he’s under attack.

Not only does he rely on a single fringe source to tell a biased story, he either lazily or wilfully distorts it. He reads from parts of the article, then at the end says again that, ‘plainly the TNI has an agenda, an explicit agenda, to throttle and choke independent media’.

But he’s completely distorted the language he himself read just a second ago. Again, there may be legitimate concerns about this, but if you look at the lawsuit, available online, filed to the district court, the so-called ‘explicit agenda’ is to find ways to ‘throttle’ and ‘choke’ false news stories. The comments about independent media are separate, and even these are misquoted.

The quote Brand reads out is from Jamie Angus at BBC News, saying: ‘Because actually the real rivalry now is not between for example the BBC and CNN globally, it’s actually between all trusted news providers and a tidal wave of unchecked [reporting] that’s being piped out mainly through digital platforms. … That’s the real competition now in the digital media world’.

This is a misquote. Both Brand and RFK Jr., and others reporting this uncritically, have conveniently left out the parts of the quote that dilute their point. Anyone can watch the clip; it’s linked below. Angus actually said that the divide is between ‘all trusted news providers and a tidal wave of unchecked, incorrect, or in fact, explicitly malicious, nonsense, specifically to destabilise regions of the world’.

How Brand has framed this is an outright lie.

The context is not only left out, it’s manipulated. The entire discussion is about how much newsrooms now have to do to verify the vast amount of information they’re dealing with; how much newsrooms have changed and the challenges they face; how many more technicians and specialists are required. Angus talks about wars, about verifying whether a TikTok from Ukraine is manipulated or useful evidence, about employing specialists in things like geolocation verification and using satellite imaging to understand bombings. He even praises ‘citizen journalism’ and talks about opening up the news ecosystem – it’s an interesting watch. Brand and his like have to do none of that difficult work. Not only that, but they rely on it, use it, feed off it, while denigrating the many ordinary people who make it possible.

You might say, well, Brand is just one person, he’s just an ‘entertainer’, he’s just commenting on articles and news, not producing it; it’s not his responsibility to fact-check every story. And that’s precisely the argument Brand makes too.

But if I – with a budget of almost nil – can quickly check a source, then maybe Stay Free with Russell Brand might also do a bit of work. I’m not saying they should have a newsroom of fact-checkers, specialists, and technicians sifting through every claim, but with the following, net worth, and status he has, he clearly has the budget to do due diligence, to check sources, to not misrepresent. With a channel that large you have a clear moral duty to. Instead, the laziest and most entertaining interpretation comes first; laziness fosters conspiracy because thoroughness exposes the truth.

I’m not saying we shouldn’t be very concerned with big tech being in control of what can and can’t be said. I disagreed with them taking down clips and interviews about vaccines and Covid. I think big tech platforms should be committed to freedom of speech.

But Angus is talking about genuine floods of disinformation, propaganda machines, Russian bot farms, designed to lie to people. And he’s right. Whatever the dangers and criticisms, I think it would be irresponsible of the mainstream media not to think carefully about this. It took me a few minutes to search through the court document and watch the clip to see that Brand and his source had either willingly or lazily misquoted the source so as to spin it into their own narrative, combined it with another quote to make it seem more malicious, and, in Brand’s case, used it to defend against accusations from many ordinary women of sexual assault. And if that doesn’t make you angry, I think it should.

 

Narcissism, News, Entertainment

In his book on conspiracy theories, Michael Shermer writes that seeing patterns everywhere – what he calls ‘patternicity’ – is the result of the need for control.

He writes: ‘the economy is not this crazy patchwork of supply and demand laws, market forces, interest rate changes, tax policies, business cycles, boom-and-bust fluctuations, recessions and upswings, bull and bear markets, and the like. Instead, it is a conspiracy of a handful of powerful people variously identified as the Illuminati, the Bilderberger group, the Council on Foreign Relations, the Trilateral Commission, the Rockefellers and Rothschilds’.

He continues: ‘conspiracists believe that the complex and messy world of politics, economics, and culture can all be explained by a single conspiracy and conspiratorial event that downplays chance and attributes everything to this final end of history’.

Instead of acknowledging messiness, complicated people, and multiple motives, conspiracy thinking sees a pattern as the result of purposeful agency in an attempt to control others.

Psychologists Mark Landau and Aaron Kay looked at studies that show how people compensate for perceived loss of control by trying to restore control themselves by ‘bolstering personal agency, affiliating with external systems perceived to be acting on the self’s behalf, and affirming clear contingencies between actions and outcomes’, and by ‘seeking out and preferring simple, clear, and consistent interpretations of the social and physical environments’.

In one study, participants were asked to think of an incident in their lives where they felt in control, while another group were asked to think of an incident where they weren’t. The latter group were more likely to believe in the conspiracy theories presented to them after.

Psychologists Joshua Hart and Molly Graether did a study and found that conspiracy believers, ‘are relatively untrusting, ideologically eccentric, concerned about personal safety, and prone to perceiving agency in actions’.

One of the most important findings in studies is that narcissism – the belief in one’s own superiority and need for special treatment – is a strong predictor of believing in conspiracies. Narcissists are also more sensitive to perceived threats.

As one paper notes, ‘the effect of narcissism on conspiracy beliefs has been replicated in various contexts by various labs’, and that, ‘narcissism is one of the best psychological predictors of conspiracy beliefs’. It continues: ‘grandiose narcissists strive to achieve admiration by boosting their egos through a sense of uniqueness, charm, and grandiose fantasizing’.

Narcissism arises out of paranoia – the sense that threats are powerful – and narcissists tend to respond with a bolstered sense of ego: the need for personal dominance and control. The need to feel unique makes narcissists feel like they have access to special information that others don’t.

It’s also been found that narcissists ‘tend to be naïve and less likely to engage in cognitive reflection’. To put it bluntly, they’re more gullible. Narcissism has been linked to low levels of ‘intellectual humility’ by one study.

Obviously the entertainment industry is full of narcissists, who are particularly suited to voicing ‘special’ opinions and entertaining people. And there is a sense in which Brand knows this is entertainment. He says things like ‘you’re gonna love this story, it’s right up your alley’ – a strange way to frame a story if you think it’s existential: https://www.youtube.com/watch?v=fjGYsner6oI&ab_channel=RussellBrand

What you get is a kind of narcissistic news porn based on paranoia and a need for control. Brand’s an entertainer. I don’t want to be psychoanalysing anyone, but Joe Rogan, Elon Musk, and Brand – three major figures who talk about conspiracy theories a lot – come from a place where maybe they wished they had more agency, more control.

Musk had a very troubled and abusive childhood in South Africa, Joe Rogan has talked about how he moved around a lot, got bullied, and learned to fight to defend himself, and Brand has a well-documented history of addiction.

What this can lead to is a feeling of not being in control, a world of threat, and a sense of paranoia. Merriam-Webster defines paranoia as ‘systematized delusions of persecution’.

It leads to the need to form a narrative to help a person feel superior by having access to special knowledge about larger forces out to persecute them that they themselves have overcome.

In The Paranoid Style in American Politics, Hofstadter points out that the paranoid person sees an enemy that is pervasive, powerful, conspiratorial, pulling the strings, and, importantly, everywhere.

He writes that the proponents of the paranoid style ‘regard a “vast” or “gigantic” conspiracy as the motive force in historical events. History is a conspiracy, set in motion by demonic forces of almost transcendent power, and what is felt to be needed to defeat it is not the usual methods of political give-and-take, but an all-out crusade’.

Hofstadter continues that the enemy is ‘a perfect model of malice, a kind of amoral superman: sinister, ubiquitous, powerful, cruel, sensual, luxury-loving’… ‘He makes crises, starts runs on banks, causes depressions, manufactures disasters, and then enjoys and profits from the misery he has produced.’ He controls the press, ‘manages the news’, brainwashes, seduces, and has control of the educational system.

For the paranoid, ‘Nothing but complete victory will do. Since the enemy is thought of as being totally evil and totally unappeasable, he must be totally eliminated’.

This is why Brand seems to get on so well with Tucker Carlson. Tucker is well versed in something that his former employer Fox News revolutionised: news as entertainment – flashy graphics, sensationalist language, us vs them narratives, a conspiracy involving every institution.

Fox News realised that it’s the ongoing narrative – good vs evil – that keeps viewers tuning back in, and so Carlson, and Brand like him, pick a story or study or witness that supports the long-running dramatic narrative that gets the views, rather than the other way around.

It’s not reporting, it’s not journalism, it’s not news, it’s entertainment – they make a few points and the rest is how it’s said, with anger, or charisma, with jokes, with a story of good vs evil. It’s shallow news porn.

 

Public Trust, Private Solutions

None of this is to defend a political system that’s failing ordinary people. None of this is to deny that inequality is widening, wealth is moving upwards, wages have stagnated, that people are underrepresented. None of it is to deny that lobbying, money, selfish interests, corporate greed all play a central role in politics. And none of this is to argue that there’s anything wrong with looking at big pharma’s financial incentives, criticising the great reset, or emphasising the concerns of farmers in climate policy. None of this is to say that we don’t need radical solutions.

What this is to absolutely reject is the framing. The paranoid style, the good vs evil narrative, the narrow selection of stories and evidence to suit your own dramatic narrative, the linking of every issue together into a totalising agenda.

Brand paints the mainstream media narrative as a lie; his is not only a bigger lie, but also a self-aggrandising and dangerous one.

George Monbiot writes about Brand that, ‘He appears to have switched from challenging injustice to conjuring phantoms. If, as I suspect it might, politics takes a very dark turn in the next few years, it will be partly as a result of people like Brand’.

If you’re not selecting the stories, facts, evidence you cover by their wider significance, if you’re picking up perspectives and narratives based on fringe evidence and ideas, then all you’re doing is being led by your own individualistic narcissistic ego. This is why Brand’s criticism of the mainstream media has only increased since an investigation into his very well-known behaviour was released. It’s obvious that this isn’t an objective analysis, it’s driven by his own fragility, his own little world.

And that’s when we get narcissistic news porn rather than careful study and analysis.

To paint the mainstream media as totally propagandised is to miss that people are multifaceted, complex, and have competing incentives. What many missed about the investigation into the recent allegations against Brand is that it was as much an investigation into the BBC and Channel Four that facilitated Brand as into Brand himself.

Think about that. Channel Four aired an investigation into itself. Would you ever see that on Brand’s channel?

Brand does no original reporting, he sits in his shed and reads from journalists who have gone out and done the work, while at the same time howling about how terrible they are.

To be clear, again, I’m not saying that there aren’t many critiques of the mainstream media to be made, and more journalism, more independent voices, ultimately, are a great, potentially revolutionary, thing to be supported.

But when you totalise and cram everything into the ‘agenda’, you paint the world in paranoid, apocalyptic terms of us vs them that dehumanises the other as individuals to be gotten rid of, rather than look at real collective, structural solutions to the problems we face.

This is why Brand gravitates towards figures like Tucker Carlson. Carlson doesn’t want collective solutions. What he wants is more of the same but with him in charge. If every institution is tainted, part of the ‘centralising agenda’, you get libertarianism, you get more corporate power, more greed, more unregulated pollution, more inequality. You get the opposite of what we need.

If you portray every institution as part of an agenda then what’s left to do? Revolution, maybe? But then what? Where are your solutions? What’s your theory? What replaces the current system?

Conspiracy thinking is the easiest type of thinking – everyone does it. It’s easy for showmen like Brand because at the end of reading off a few quotes from one source you can just link them to the agenda, the great reset, a ‘centralising agenda’, and Bill Gates.

It’s like having a safety blanket to return to that says don’t worry, the world is evil, but you know the truth, you have it all figured out in a simple little package, don’t worry, you never have to think again.

 

Sources

Terry Pinkard, Does History Make Sense?

Justin E. H. Smith, Irrationality: A History of the Dark Side of Reason

https://www.bbc.co.uk/news/entertainment-arts-66369532

Jan-Willem van Prooijen, The Psychology of Conspiracy Theories

https://theintercept.com/2020/12/08/great-reset-conspiracy/

Richard Hofstadter, The Paranoid Style in American Politics

https://www.dailymail.co.uk/news/article-10186363/Researchers-running-arm-Pfizers-Covid-jab-trials-falsified-data-investigation-claims.html

https://theconversation.com/vaccine-trial-misconduct-allegation-could-it-damage-trust-in-science-171164

https://inews.co.uk/news/russell-brand-advertisers-pulling-ads-rumble-site-comedian-videos-2633281?ito=twitter_share_article-top

https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2507

https://www.theguardian.com/environment/2022/nov/30/peak-polluters-last-chance-close-dutch-government

Steven Umbrello, Should We Reset?

Michael Christensen and Ashli Au, The Great Reset and the Cultural Boundaries of Conspiracy Theory

Ivan Wecke, Conspiracy Theories Aside, There is Something Fishy about the Great Reset

Michael Shermer, Conspiracy: Why the Rational Believe the Irrational

Aleksandra Cichocka, Marta Marchlewska, Mikey Biddlestone, Why do narcissists find conspiracy theories so appealing?

T. J. Cosgrove and C. P. Murphy, Narcissistic susceptibility to conspiracy beliefs exaggerated by education, reduced by cognitive reflection

https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=3750&context=historical

https://reclaimthenet.org/rfk-jr-sues-mainstream-media-misinformation-cartel

https://www.bbc.co.uk/beyondfakenews/trusted-news-initiative/role-of-the-news-leader/

https://www.hollandtimes.nl/articles/national/tata-steel-environmental-threat-or-essential-industry/

https://www.bmj.com/content/375/bmj.n2635

https://pubmed.ncbi.nlm.nih.gov/25688696/

 

The post CONSPIRACY: The Fall of Russell Brand appeared first on Then & Now.

]]>
https://www.thenandnow.co/2023/12/05/conspiracy-the-fall-of-russell-brand/feed/ 0 949
A Note on Expertise https://www.thenandnow.co/2023/11/09/a-note-on-expertise/ https://www.thenandnow.co/2023/11/09/a-note-on-expertise/#respond Thu, 09 Nov 2023 11:59:14 +0000 https://www.thenandnow.co/?p=1017 When deciding what to make videos about, I am usually drawn between several factors: most obviously, what I want to make. Second, what I think will do well, or what my audience would be interested in. Third, what I feel a responsibility to make. The first can be gratifying, authentic, but also self-indulgent and unpopular. […]

The post A Note on Expertise appeared first on Then & Now.

]]>
When deciding what to make videos about, I am usually drawn between several factors: most obviously, what I want to make. Second, what I think will do well, or what my audience would be interested in. Third, what I feel a responsibility to make.

The first can be gratifying, authentic, but also self-indulgent and unpopular. I have to make a living and ensure the viability of the channel in the future, and this isn’t always the way to do it.

The second – what I think an audience wants to see – is a useful corrective, but also keeps me connected to what other people think is important, what they want to watch. Taken to the extreme, this can lead to ‘selling out’, but I think when considered with the first, it allows you to think through how to meet your audience ‘where they are.’ It stops you from speaking simply to and for yourself, and forces you to consider how to connect with as many people as possible, which in turn might increase the influence you can build for when you do want to make something very personal or otherwise unpopular.

If you can spark something in people, earn their trust, understand their viewpoint, and then try to convince them of what you want to say, what you believe, then the argument will be all the better for it.

Then there’s the third factor – responsibility. Hopefully, this always has some influence on both. But sometimes there are topics that I am not hugely interested in spending my time on personally, nor are they likely to be the most popular. However, knowing they are important nonetheless, I am always drawn to spend as much time as I can at least understanding them. This is especially the case when the topic overlaps with my background in History and Politics.

I think that, in most cases, if you know more than the median voice on Youtube, you have a responsibility to try to say something. Otherwise, the public sphere is left open to those who have big mouths, small minds, and zero tolerance for research.

Sometimes, there is a tipping point; a moment when you feel a reasonable grasp on the literature and get a sense of the public discourse; when you feel compelled to say something rather than nothing. Because in this libertarian media ecosystem, I’ve seen videos with millions of views commenting with zero expertise, but also experts who are very knowledgeable making dogmatic arguments which I know can be quite reasonably refuted by other experts. Furthermore, the mainstream media themselves seem incapable of providing good longform explainers and analysis, both of which are increasingly rare, as all parties are incentivised to release sensationalist, short, and frequent content in an arms race for clicks.

Ultimately, the ideal is to spend my time on topics which fit neatly into the middle of the Venn diagram of all three – what I want to make, what people will be interested in, and what I have a responsibility to make. The needle will always be shifting, but whether I am successful in balancing those factors, I’ll leave to you.

The post A Note on Expertise appeared first on Then & Now.

]]>
https://www.thenandnow.co/2023/11/09/a-note-on-expertise/feed/ 0 1017
Understanding Israel and Palestine: A Reading List https://www.thenandnow.co/2023/11/05/making-sense-of-israel-and-palestine/ https://www.thenandnow.co/2023/11/05/making-sense-of-israel-and-palestine/#comments Sun, 05 Nov 2023 13:43:15 +0000 https://www.thenandnow.co/?p=1010 It’s important to note that I am not an expert. However, I do have a background in history, philosophy, politics, and international relations, as well as relevant cursory knowledge to draw upon. For the past month or so I have been reading as widely as possible. More importantly, I have – to the best of […]

The post Understanding Israel and Palestine: A Reading List appeared first on Then & Now.

]]>
It’s important to note that I am not an expert. However, I do have a background in history, philosophy, politics, and international relations, as well as relevant cursory knowledge to draw upon. For the past month or so I have been reading as widely as possible. More importantly, I have – to the best of my ability – been carefully selecting sources from different perspectives and trying to understand the people and debates. Because the online space seems bereft of reasonable longform analysis, I have decided to list what I’ve been reading here with a few comments. I will continue to add to it.

I’ve organised it loosely into books and longform articles. I will add some films, too.

 

Books

 

Abdel Monem Said Aly, Khalīl Shiqāqī, and Shai Feldman, Arabs and Israelis: Conflict and Peacemaking in the Middle East
https://www.bloomsbury.com/uk/arabs-and-israelis-9781350321380/

The best general academic overview I’ve come across. Detailed and sensitive to different narratives, it is a long but invaluable starting point. The authors go through the most important historical moments, then present narratives that are commonly held, for example, in Palestine, in Israel, in Arab states, or in the US. The authors then attempt a short analysis comparing each.

Rashid Khalidi, The Hundred Years’ War on Palestine
https://www.amazon.co.uk/Hundred-Years-War-Palestine/dp/178125933X

Rashid Khalidi is probably the most well-known Palestinian-American historian working today. He is a professor of Modern Arab Studies at Columbia University. This is a morally charged narrative history which foregrounds Zionism as a settler-colonial movement and the displacement of the Palestinian people. It’s forceful, well received but not without its critics, and concludes with the continuing marginalisation of the Palestinians in the Oslo Accords.

This NYTimes review is worth reading: https://www.nytimes.com/2020/01/28/books/review/the-hundred-years-war-on-palestine-rashid-khalidi.html

Ari Shavit, My Promised Land
https://www.amazon.co.uk/My-Promised-Land-Triumph-Tragedy/dp/0385521707

If you think of early Zionists as ‘evil’ colonists and occupiers, then this book is a useful corrective. It highlights the contradictions, romanticism, idealism, persecution, and naivety that motivated Zionists fleeing Europe from the late 19th century on. Drawing on Shavit’s own family history, it’s movingly and personally written. Shavit asks how his well-intentioned Zionist forebears, moving excitedly to Palestine to build new lives, did not see the people already there. Or maybe did not care.

Alpaslan Özerdem, Roger Mac Ginty, Comparing Peace Processes
https://www.routledge.com/Comparing-Peace-Processes/Ozerdem-Ginty/p/book/9781138218970

The relevant chapter is a good summary of the peace process since the Oslo Accords and argues compellingly that the process has been one-sided.

Benny Morris, 1948 and After
https://global.oup.com/academic/product/1948-and-after-9780198279297

Benny Morris is one of the ‘new historians’ who challenged the traditional historical narrative in Israel. This is a good introduction to the debates and historiography that surround the 1948 war and beyond. The 1948 moment is probably the most crucial for understanding what motivates both the Israeli right and the Palestinians in particular.

Thomas Friedman, From Beirut to Jerusalem
https://www.amazon.co.uk/Beirut-Jerusalem-Thomas-L-Friedman/dp/1250034418

I have only just started this, but Friedman is widely regarded to be one of the best authors on the Middle East, spending many years living and reporting from both Beirut and Jerusalem. The preface alone is the best introduction I’ve read to the complex politics, relationships, and wars of the surrounding countries, particularly in Lebanon. It gives you a good sense of the complexity of the entire region.

John J. Mearsheimer and Stephen M. Walt, The Israel Lobby and U.S. Foreign Policy
https://www.hks.harvard.edu/publications/israel-lobby-and-us-foreign-policy

Walt and Mearsheimer’s influential claim that AIPAC has a disproportionate influence on US foreign policy, which they argue would be much more effectively directed elsewhere. It began as a paper and was later expanded into a book.

Asima Ghazi-Bouillon, Understanding the Middle-East Peace Process
https://www.routledge.com/Understanding-the-Middle-East-Peace-Process-Israeli-Academia-and-the-Struggle/Ghazi-Bouillon/p/book/9780415853200

This book focuses on the new historians too, but also on the wider academic context in Israel, looking at concepts like ‘post-Zionism’ – that Zionism is over, has fulfilled its goals, and should be superseded – and ‘neo-Zionism’ – that new battles over things like demographics have begun. It is quite dense, drawing on philosophy and theory to think through the different discourses, but it is a useful frame if you want to understand how Israeli academia has concrete effects on what happens.

Avi Shlaim, Israel and Palestine: Reappraisals, Revisions, and Refutations
https://www.versobooks.com/en-gb/products/2094-israel-and-palestine

A broad and accessible overview of the history from the Balfour Declaration on, including discussions of the different debates in the historiography, especially on the most contentious moments.

 

Longform articles

 

Haaretz, A Brief History of the Netanyahu-Hamas Alliance
https://www.haaretz.com/israel-news/2023-10-20/ty-article-opinion/.premium/a-brief-history-of-the-netanyahu-hamas-alliance/0000018b-47d9-d242-abef-57ff1be90000

Makes the case that the Netanyahu government and Hamas benefit from each other.

A Threshold Crossed: Israeli Authorities and the Crimes of Apartheid and Persecution.
https://www.hrw.org/report/2021/04/27/threshold-crossed/israeli-authorities-and-crimes-apartheid-and-persecution

A thorough 200+ page report by Human Rights Watch describing how, by the ICC’s own definitions, the Israeli government is pursuing policies in the West Bank that can be described as apartheid – among other things, by restricting freedom of movement and assembly, denying building permits to Palestinians but not Israelis, controlling water supplies, denying the right of return to Palestinians but not Israelis, and effectively ruling over a two-tier society.

Avi Shlaim, The War of the Israeli Historians
https://users.ox.ac.uk/~ssfc0005/The%20War%20of%20the%20Israeli%20Historians.html#:~:text=This%20war%20is%20between%20the,years%20of%20conflict%20and%20confrontation.

A good introduction to a civil ‘war’ within Israel between two interpretations of the country.

Shlaim writes ‘this war is between the traditional Israeli historians and the ‘new historians’ who started to challenge the Zionist rendition of the birth of Israel and of the subsequent fifty years of conflict and confrontation’.

He continues, ‘the revisionist version maintains, in a nutshell, that Britain’s aim was to prevent the establishment not of a Jewish state but of a Palestinian state; that the Jews outnumbered all the Arab forces, regular and irregular, operating in the Palestine theatre and, after the first truce, also outgunned them; that the Palestinians, for the most part, did not choose to leave but were pushed out; that there was no monolithic Arab war aim because the Arab rulers were deeply divided among themselves; and that the quest for a political settlement was frustrated more by Israeli than by Arab intransigence.’

New Yorker, Itamar Ben-Gvir, Israel’s Minister of Chaos
https://www.newyorker.com/magazine/2023/02/27/itamar-ben-gvir-israels-minister-of-chaos

A good primer on the far-right in Israel.

 

More

I haven’t examined it in detail, but this reading list from UCLA looks useful: https://www.international.ucla.edu/israel/article/270276

 

 

The post Understanding Israel and Palestine: A Reading List appeared first on Then & Now.

]]>
https://www.thenandnow.co/2023/11/05/making-sense-of-israel-and-palestine/feed/ 1 1010
The Origins of the Israel/Palestine Conflict https://www.thenandnow.co/2023/11/02/the-origins-of-the-israel-palestine-conflict/ https://www.thenandnow.co/2023/11/02/the-origins-of-the-israel-palestine-conflict/#comments Thu, 02 Nov 2023 14:59:14 +0000 https://www.thenandnow.co/?p=994 The difficulty with the conflict between Israel and Palestine is that it has so many components. Immigration, national identity, empires and colonialism, democracy, religion and modernisation, terrorism, victimisation and persecution, war. Even when focusing on the simplest building blocks of its very beginnings, we can see how more than anything, subtle emphases – differences between […]

The post The Origins of the Israel/Palestine Conflict appeared first on Then & Now.

]]>
The difficulty with the conflict between Israel and Palestine is that it has so many components. Immigration, national identity, empires and colonialism, democracy, religion and modernisation, terrorism, victimisation and persecution, war.

Even when focusing on the simplest building blocks of its very beginnings, we can see how, more than anything, subtle emphases – differences between well-intentioned observers – matter.

Because of this, I’ve carefully selected three main sources, and drawn on others. The first, and one I recommend the most, is a very readable textbook called Arabs and Israelis: Conflict and Peacemaking in the Middle East. It’s by three scholars: Abdel Monem Said Ally, Shai Feldman, and Khalil Shikaki, and it pays careful attention to different historical narratives before analysing them as even-handedly as possible.

Then, Palestinian-American historian Rashid Khalidi’s The Hundred Years’ War on Palestine is from a Palestinian perspective, while Israeli writer Ari Shavit’s My Promised Land is from an Israeli one.

Of course, even referring to a perspective as ‘Israeli’ or ‘Palestinian’ is an enormous oversimplification, ignoring the vast differences there always are within and between groups. I’ve also drawn on a few historians who’ve been labelled Israeli ‘new historians’ – this loose group have challenged a traditional historical narrative in Israel, something we’ll come to. The literature on this is vast, intellectual humility is required, and so I will focus only on the origins. I’ll also return to a note on how and why I’ve approached this in the way I have at the end.

Towards the end of the 19th century, outbreaks of violence against Jews called pogroms increased across Eastern Europe.

In most countries, Jews were second-class citizens. They couldn’t own land or vote, had different and varying legal rights, were marginalised, lived in ghettos, and were often arbitrarily blamed for problems, targeted, and murdered.

This was coming to a head in the last two decades of the 19th century.

In 1881, in the Russian Empire, Jewish communities were attacked after Tsar Alexander II was assassinated and one of the conspirators turned out to have Jewish ancestry. A wave of pogroms resulted. But this was just one of many instances. In modern-day Moldova in 1903, 49 people were killed, many more were injured and raped, and homes were attacked.

It’s important to remember that this was a relatively borderless period. Palestine had been administered by the decaying Ottoman Empire for centuries. It was already home to a small number of Jews who lived peacefully alongside an Arab majority, mainly Muslims, with a few Christians.

This was a period very different from today. Empires were the norm, borders were always changing, but the idea of ‘nation-states’, that peoples had the right to self-determine, to govern themselves, was on the rise. In 1800 the population of Palestine was 2% Jewish – some 6700 Jews. By 1890, 42,000 Jews had moved there, while the Arab population was around 500,000. By 1922, the Jewish population had doubled to 83,000.

Towards the end of the 19th century, Jewish settlers started buying land from absentee urban Arab landlords, leading to the displacement of the Arab peasants who had worked it. In 1891, 500 Arabs signed a letter of complaint to the Ottomans about the practice.

In My Promised Land, Ari Shavit describes the complex and sometimes contradictory motivations of the young Zionist movement at the end of the 19th century. For some, fleeing violence, it was a matter of life and death; for others, like his own British great-grandfather, it was a complex choice, combining solidarity with those fleeing persecution, a romantic idea of the Holy Land, and a modern one too – that a thriving new future could be built in a land widely and falsely seen as empty.

Judaic Studies professor David Novak has written: ‘The modern Zionism that emerged in the late nineteenth century was clearly a secular nationalist movement’. However it had deep religious and historical roots to draw on as well – that Palestine was the Jewish ancestral homeland, the Exodus from Egypt to the promised land, and later exiles from the region, and returns. But Zionism was never unified – many, many disagreed, religious and secular alike, and those who agreed or became Zionists did so for many reasons. Shavit points out that travellers from places like Britain didn’t see Palestine for what it was. They saw empty desert. They saw a few Bedouin tribes. They saw possibility. They didn’t see the Palestinian villages and towns, or maybe, he says, they chose to ignore them?

They also saw poverty – dirt huts and tiny villages. They believed, or said they believed – as many colonists also claim, it’s important to note – that the indigenous population would benefit from Jewish capital, education, technology, and ideas, and it’s true that many did.

Drawing on his grandfather’s diaries, Shavit asks why his grandfather ‘did not see’. After all, he was served by Arab stevedores and Arab hotel staff, carried in carriages by Arab villagers, led by Arab guides and horsemen, and shown Arab cities.

He uses a word: blindness. They were too focused on a romantic ideal of the area and the tragic oppression they were fleeing from. Shavit writes: ‘Between memory and dream there is no here and now.’

Not everyone was blind, though. At the beginning of the 20th century one Zionist author, Israel Zangwill, gave a speech in New York that reported that Palestine was not empty. That they would have to ‘drive out by sword the tribes in possession, as our forefathers did.’

This was heresy. No one wanted to hear it. He was ignored.

So between 1890 and 1913, around 80,000 Zionists emigrated, and in the short period between WWI and WWII the same number again. But this snowballed with the rise of Nazism in the 1930s. Between 1933 and 1940, 250,000 fled Germany. In 1935 alone, 60,000 moved to Palestine – more than the entire Jewish population there in 1917.

With this came millions in capital and investment, and successful settlements, villages, and towns began growing.

This huge demographic movement coincided with the most important shift of power in the region. The defeat of the Ottoman Empire during WWI and the subsequent British takeover of control.

During WWI, Zionists in Palestine provided valuable information to Britain, formed spy networks, and volunteered to fight.

At the same time, a coalition of Arabs supported Britain by rising up against the Ottomans in the Great Arab Revolt. In return they were promised an independent Arab state by the British.

But Britain made several contradictory promises in quick succession.

In 1917, the Balfour Declaration – a memo between Foreign Secretary Lord Balfour and Lord Rothschild – committed the British Government to a home for the Jewish people in Palestine.

The Balfour declaration neglected to mention the word Arab, who comprised 94% of the population. It read: ‘His Majesty’s government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavours to facilitate the achievement of this object, it being clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine, or the rights and political status enjoyed by Jews in any other country.’

Here lies the root of the conflict; the contradictory promise: ‘when the promised land became twice promised’, in the words of historian Avi Shlaim.

Reporting this news in Palestine was banned by the British.

Instead, the British and French had already divided the area into spheres of influence under the 1916 Sykes-Picot Agreement, and after the defeat of the Ottomans, Palestine came under British control as a mandate. This was the famous ‘line in the sand’, drawn by people who had little knowledge of the area.

In a private 1919 memo only published 30 years later, Lord Balfour admitted: ‘In Palestine we do not propose even to go through the form of consulting the wishes of the present inhabitants of the country… The four Great Powers are committed to Zionism. And Zionism, be it right or wrong, good or bad, is rooted in age-long traditions, in present needs, in future hopes, of far profounder import than the desires and prejudices of the 700,000 Arabs who now inhabit that ancient land.’

The British Mandate gave the Jewish Agency in Palestine status as a public body to help run the country. Jewish communities and leaders formed institutions for self-defence and governance, which the British slowly recognised, essentially becoming a government in waiting.

As a result, outbreaks of violence began to increase in the 1920s, getting progressively worse. In 1929, hundreds of Jews and Arabs were killed and hundreds more wounded at the Western Wall in Jerusalem. Tensions rose, resulting in a series of massacres of Jews by Arabs, one of which, in Hebron, resulted in the death of almost 70 Jews and the injuring of many more. In response to the violence, the British declared a state of emergency. They proposed a legislative council that would comprise six nominated British and four nominated Jewish members, and twelve elected members, including two Christians, two Jews, and eight Muslims.

Seeing themselves as outnumbered on a governing panel in a country in which they were the clear majority, Palestinians rejected the proposal. Another was proposed that was slightly fairer to the Palestinians, but this time it was rejected by the Zionists and British parliament.

During the largest wave of immigration as the Nazis came to power, Palestinians called for a general strike demanding an end to Jewish migration and the sale of land to Zionists by absentee urban landlords, which continued to dispossess peasants working the land.

In 1936, an Arab revolt began when gunmen shot three Jews, setting off a series of attacks and counterattacks that led to the deaths of around 415 Jews and 101 Britons. The British response was swift and brutal: 5,000 Arabs were killed by the British, violence continued into 1937, and many were imprisoned or exiled. In all, 10% of the Arab population was killed, injured, exiled, or imprisoned.

Khalidi puts the figure higher, writing: ‘The bloody war waged against the country’s majority, which left 14 to 17 percent of the adult male Arab population killed, wounded, imprisoned, or exiled.’

Said Ally, Feldman, and Shikaki write that it was ‘disastrous for the Palestinians.’

In one instance an 81-year-old rebel leader was executed after being found with a single bullet. The British tied Palestinian prisoners to the front of their cars to prevent ambushes. Homes were destroyed. Many were tortured and beaten, including at least one woman.

However, as a result of the unrest, a 1937 British government report recommended two states for the first time. The Arab state, though, would not be Palestinian: it was to be merged with Transjordan.

In 1939, a British government white paper called for a single, jointly administered Palestine, and limited Jewish immigration and land sales.

The Holocaust changed all this. Even more disastrous for the Palestinians was their leadership’s decision to side with Hitler in 1941, after he told them that the Nazis had no plans to occupy Arab lands.

As the true extent of the Holocaust became clearer, the plight of European Jews became more urgent in the eyes of European and US policymakers. It’s crucial to remember the extent of the horror – six million Jews industrially murdered. After the war, there were 250,000 Jews living in refugee camps in Germany alone. Britain was bankrupt and was pulling out of many of its former colonies. Syria, Lebanon, Jordan, and Egypt gained their independence, and they formed the Arab League.

More plans were proposed, including the Morrison-Grady Plan in 1946 calling for two separate autonomous Arab and Israeli regions under British defence, which was again rejected by both Zionists and Palestinians.

A UN plan in 1947 proposed 43% of the area going to Palestinians, despite them comprising two thirds of the population. It was rejected by the Arab Higher Committee who called for a three-day general strike.

The newly independent (or at least quasi-independent) surrounding Arab states were becoming increasingly hostile to Zionism and increasingly invested in the plight of the Palestinians. But they also saw potential to increase their own territory or to gain power in the region. Egypt saw itself as a new Ottoman Empire. King Abdullah of Transjordan saw Palestine as part of Transjordan. He thought that victory in the war against Israel would be secured in ‘no more than ten days.’

The USSR, seeing the potential of a state of Israel as a socialist ally, provided weapons to the Zionists. Seeing themselves as decisively outnumbered and outgunned, with no tanks, navy, or aircraft (the Arab countries, to varying degrees, did have this equipment), Ben-Gurion secured a deal with Czechoslovakia for $28m worth of weapons and ammunition, increasing their weapons supply by 25% and their ammunition by 1000%. In 1968, Ben-Gurion remembered: ‘the Czech weapons truly saved the state of Israel. Without these weapons we would not have remained alive’.

By now the Palestinians and Zionists were in a state of civil war, with continued attacks and counterattacks.

In early 1948, knowing the British would leave, Arab countries prepared to invade and the Jewish state institutions-in-waiting prepared a plan of defence. There were already Jewish settlements outside the proposed UN partition boundaries, and, of course, many Palestinian areas within them.

Zionist leadership prepared what was referred to as Plan D, which included, ‘self-defense against invasion by regular or semi-regular forces’, and ‘freedom of military and economic activity within the borders of the [Hebrew] state and in Jewish settlements outside its borders’.

All of this was made worse by British bankruptcy and by Irgun, a hard-line Zionist militant group that bombed the British Mandate headquarters, killing 92 people, and was involved in skirmishes with Palestinians. In one attack in April 1948, Irgun killed between 115 and 250 men, women, and children in a village near Jerusalem, despite a non-aggression pact.

So on 15 May 1948, the British left. The day before, David Ben-Gurion declared the establishment of the new state of Israel. The day after, a coalition of Arab forces from Egypt, Jordan, Syria, Lebanon, and Iraq invaded.

For the most part, Israel captured and defended the areas allotted to them by the 1947 UN plan, as well as areas outside of it.

Hundreds of thousands of Palestinians were forced to flee their homes. Palestinians call it the Nakba – the Catastrophe.

The result of the war was the Gaza Strip coming under Egypt’s control, the West Bank contested but under the control of Jordan’s forces (to be annexed in 1950), and anywhere between 400,000 and a million Palestinians displaced.

There is complexity here, and this is only a small fraction of the story, but it’s impossible to ignore that the Nakba was a catastrophe: power differentials, foreign influence, empire, failures to compromise, the perpetration of atrocities, and the loss of homes and land that would never be returned to. The Palestinians were divided, outnumbered, and kept weak by Britain, the Zionists, the US, the USSR, and their surrounding Arab neighbours.

Journalist Arthur Koestler famously said that, ‘One nation solemnly promised to a second nation the country of a third’.

While British Prime Minister Neville Chamberlain had tried to limit immigration to Palestine, he was replaced by Winston Churchill, one of the biggest supporters of Zionism in British public life. In 1937 Churchill said of Palestine that: ‘I do not agree that the dog in a manger has the final right to the manger even though he may have lain there for a very long time. I do not admit that right. I do not admit for instance, that a great wrong has been done to the Red Indians of America or the black people of Australia. I do not admit that a wrong has been done to these people by the fact that a stronger race, a higher-grade race, a more worldly wise race to put it that way, has come in and taken their place’.

In response to the UN’s plan to partition Palestine in 1947, several Arab countries warned of, or even threatened, violence against and expulsion of Jews in their own countries. In 1950 and 1951, Iraq stripped Jews of their Iraqi nationality and property rights. Antisemitism in Yemen led to the migration of 50,000 Jews between 1949 and 1950, and there were attacks on Jews in Tripoli as early as 1945. Whether punitive policies and attitudes began before the war or as a result of it is a matter of debate.

What becomes clear, though, is that moral questions depend on the minutiae of often unanswerable questions; ones that historians are still, often acrimoniously, debating.

Who – which groups and subgroups – was most responsible for the violence in ’47? Were 19th-century Zionists ‘blind’, ‘altruistic’, in existential danger? Were they colonisers in the usual sense, or victims fleeing violence in Europe?

Shavit writes that, ‘these pilgrims do not represent Europe. On the contrary. They are Europe’s victims. And they are here on behalf of Europe’s ultimate victims.’

Anyone who tells you that answers are easy to come by is wrong. Antisemitism was at its height in the 1940s. The Holocaust had just happened. Jewish immigrants had purchased land and settled in Palestine peacefully for decades. But amongst these difficulties, there are some indisputable facts. The UN partition plan offered Palestinians 43% of the land despite their comprising 68% of the population. And around 700,000 Palestinians became refugees.

Shavit cites a letter written from an Israeli he knew who fought the 1947-48 war. He wrote about the time: ‘when I think of the thefts, the looting, the robberies and recklessness, I realize that these are not merely separate incidents. Together they add up to a period of corruption. The question is earnest and deep, really of historic dimensions. We will all be held accountable for this era. We shall face judgment. And I fear that justice will not be on our side’.

And this is one report from an Israeli military governor, reporting a conversation with Palestinian dignitaries when Palestinians were forced from the small city of Lydda in 1948:

DIGNITARIES: What will become of the prisoners detained in the mosque?

GOVERNOR: We shall do to the prisoners what you would do had you imprisoned us.

DIGNITARIES: No, no, please don’t do that.

GOVERNOR: Why, what did I say? All I said is that we will do to you what you would do to us.

DIGNITARIES: Please no, master. We beg you not to do such a thing.

GOVERNOR: No, we shall not do that. Ten minutes from now the prisoners will be free to leave the mosque and leave their homes and leave Lydda along with all of you and the entire population of Lydda.

DIGNITARIES: Thank you, master. God bless you.

And in many cases, people left before the war broke out. In one case, the Israeli mayor even begged the Palestinians to stay – though it was the only such case.

For many years, the ‘Israeli’ narrative – although to call it that is far too simplistic, ignoring the disagreements, differences, and dissent within the conversation – was that the surrounding Arab states called upon the Arabs in Palestine to leave so that they could invade.

School books in Israel taught that Israelis wanted peace, but they were surrounded by enemies who wanted their destruction; that the Arabs fled to safety as a natural process of war.

This was challenged in the 1980s as official archives were opened, and a generation of ‘new’ Israeli historians looked differently at the period.

Benny Morris, one of those new historians, argued that there was no master plan of expulsion. It was understood, however, that it was in the leadership’s interests to establish a Jewish state with as small a minority of Palestinian Arabs as possible.

Most say the order came from Ben-Gurion himself. Among them is the later Prime Minister Yitzhak Rabin, who wrote in his autobiography that Ben-Gurion had given him the order to expel the Palestinian Arabs from Lydda. When Rabin tried to publish this in 1979, it was censored.

What’s clear is that there was an overwhelming atmosphere – of fear, of exodus, of violence and beatings, of many massacres, of war in general – that led to 700,000 Palestinians leaving their homes, never to return.

 

Sources:

Understanding Israel and Palestine: A Reading List

The post The Origins of the Israel/Palestine Conflict appeared first on Then & Now.

]]>
https://www.thenandnow.co/2023/11/02/the-origins-of-the-israel-palestine-conflict/feed/ 2 994
The Shock of Modernity https://www.thenandnow.co/2023/10/26/the-shock-of-modernity/ https://www.thenandnow.co/2023/10/26/the-shock-of-modernity/#respond Thu, 26 Oct 2023 13:10:56 +0000 https://www.thenandnow.co/?p=988 The end of the nineteenth century was a period of unprecedented upheaval. Factories sprouted in masses, railways were laid at great length, urbanisation sprawled and beckoned, and the masses were organised capitalistically and politically. All of this happened at dizzying speed. This was the moment the modern world crashed together and dragged people from the […]

The post The Shock of Modernity appeared first on Then & Now.

]]>
The end of the nineteenth century was a period of unprecedented upheaval. Factories sprouted en masse, railways were laid at great length, urbanisation sprawled and beckoned, and the masses were organised capitalistically and politically.

All of this happened at dizzying speed. This was the moment the modern world crashed together and dragged people from the fields to the factory floor.

Within a generation, the entire consciousness of life had changed.

Science challenged deeply-held views of the world.

Darwin published On the Origin of Species in 1859.

He pulled the Gods down from the sky and transformed humans into just another animal.

This, of course, was shocking, traumatising, existentially threatening.

The philosopher Soren Kierkegaard wrote in 1844 that, ‘Deep within every human being there still lives the anxiety over the possibility of being alone in the world, forgotten by God, overlooked by the millions and millions in this enormous household’.

Nietzsche, famously proclaiming the death of God, argued that men would become nihilistic, lose their grounding, forsake their morals, if a new ethics of man did not come.

Darwin, the death of God, the prosperity of industry, science, all pointed towards something that could be terrifying: freedom.

Kierkegaard went on: ‘Anxiety may be compared with dizziness. He whose eye happens to look down into the yawning abyss becomes dizzy. But what is the reason for this? It is just as much in his own eyes as in the abyss . . . Hence, anxiety is the dizziness of freedom’.

Freedom was the expansion of options – of ways to live life personally, of political options, with commercial options.

Warfare was changing: swords and rifles, of which there were only a few, were being replaced by stuttering guns and spat bullets at an incomprehensible rate, artillery and bombs that sent shrapnel shredding in a cacophony of unbearable noise.

The word ‘panic’ was first used in its clinical sense in 1879 by the psychiatrist Henry Maudsley, to describe extreme agitation, trembling, and terror.

People were nervous – literally. A new diagnosis became popular amongst America’s elites: neurasthenia.

It was a contemporary form of stress, characterised by symptoms like fatigue, headache, and irritability.

Neurasthenia, according to the physician George Beard, was the result of a depletion of nervous energy, and was becoming more common as a reaction to the anxieties of the modern world and the demands of American exceptionalism. Neurasthenia was almost a fashion. Adverts appeared selling ‘nerve tonics’, self-help books dominated the shelves, and even breakfast cereals claimed to be able to cure ‘Americanitis’.

Beard argued that there were five main causes of neurasthenia: steam power, the periodical press, the telegraph, the sciences, and the mental activity of women.

He argued that these phenomena contributed to the competitiveness and speed of the modern world.

Even time itself was to blame.

He wrote, ‘the perfection of clocks and the invention of watches have something to do with modern nervousness, since they compel us to be on time, and excite the habit of looking to see the exact moment, so as not to be late for trains or appointments. Before the general use of these instruments of precision in time, there was a wider margin for all appointments. We are under constant strain, mostly unconscious, often times in sleeping as well as in waking hours, to get somewhere or do something at some definite moment’.

The recently laid telegraphs also meant that prices and information could be sent around the world at a moment’s notice, piling the pressure on merchants to keep up with the latest news from all around the world.

According to the pre-psychological way of understanding the human mind, all of these phenomena hit the nerve endings, draining the life force.

Unnatural modern noises did this too.

Beard wrote: ‘Nature – the moans and roar of the wind, the rustling and trembling of the leaves, and swaying of the branches, the roar of the sea and of waterfalls, the singing of birds, and even the cries of some wild animals – are mostly rhythmical to a greater or less degree, and always varying if not intermittent’.

As with Kierkegaard’s anxieties over freedom, for Beard, politics and religion also added to the drain: ‘The experiment attempted on this continent of making every man, every child, and every woman an expert in politics and theology is one of the costliest of experiments with living human beings’.

‘A factor in producing American nervousness is, beyond dispute, the liberty allowed, and the stimulus given, to Americans to rise out of the possibilities in which they were born’.

Excitement and disappointment were a drain on nerve-force.

But one innovation was so emblematic of the shock of modernity, of the distortion of time, of the inability of man to adapt to his surroundings, that it’s mentioned almost everywhere the topic is discussed:

The railway.

Historian Wolfgang Schivelbusch argues that the railways didn’t just change travel, but changed the very notion of time itself.

Before the railways, cities, towns, and villages had local times, which had to be standardised for train timetables. ‘London time ran four minutes ahead of time in Reading, seven minutes and thirty seconds ahead of Cirencester time, fourteen minutes ahead of Bridgwater time’. People could imagine being in other places much more easily, changing the very way they thought.

It was such a part of the cultural zeitgeist of the time that on 3 October 1868, the Illustrated London News reported that five theatres were all performing the same incident: someone tied to a track, or lying unconscious on it, while a train came hurtling towards them.

These productions made use of modern special effects using lights and smoke, and The Times described them as a ‘perfect fever of excitement’.

The theatres performing these spectacles were, for the first time, open to people from outside the centre of London, who could travel in on the omnibuses or trains – the same transport whose dangers were about to thrill them.

Railway accidents were common. One in 1868 killed 33 people.

One passenger wrote, ‘We were startled by a collision and a shock. [. . .] I immediately jumped out of the carriage, when a fearful sight met my view. Already the three passenger carriages in front of ours, the vans and the engine were enveloped in dense sheets of flame and smoke, rising fully 20 feet. [. . .] [I]t was the work of an instant. No words can convey the instantaneous nature of the explosion and conflagration. I had actually got out almost before the shock of the collision was over, and this was the spectacle which already presented itself. Not a sound, not a scream, not a struggle to escape, or a movement of any sort was apparent in the doomed carriages. It was as though an electric flash had at once paralysed and stricken every one of their occupants. So complete was the absence of any presence of living or struggling life in them that it was imagined that the burning carriages were destitute of passengers’.

This idea of instantaneous death mixed with machinery was so new and so shocking that it dominated the culture.

Charles Dickens himself was involved in a train crash and afterwards wrote the ghost story ‘The Signal-Man’. According to his children, he was never the same again.

All of this – industry, commercialism, fear, anxiety, thrill, trains – culminated in an emphasis on sensation and the birth of sensationalism. The point was the senses. The modern world could trigger them, play on them, manipulate them, and sell to them, all at a tremendous speed.

The Irish playwright Dion Boucicault made sensation the centre of his plays. He intended to ‘electrify’ the audience.

A review of one of his plays illustrates this emphasis on the senses: ‘The house is gradually enveloped in fire [and] [. . .] bells of engines are heard. Enter a crowd of persons. [. . .] Badger [. . .] seizes a bar of iron, dashes in the ground-floor window, the interior is seen in flames. [. . .] Badger leaps in and disappears. Shouts from the mob. [. . .] [T]he shutters of the garret fall and reveal Badger in the upper floor. [. . .] Badger disappears as if falling with the inside of the building. The shutters of the window fall away, and the inside of the house is seen, gutted by the fire; a cry of horror is uttered by the mob. Badger drags himself from the ruins’.

Drama of such speed and excitement had rarely been seen before.

In the early 1860s, sensation novels suddenly became popular.

In 1866, an article in the Westminster Review lamented that all minor novelists were now sensationalists.

Literary critic D. A. Miller describes it like this: ‘The genre offers us one of the first instances of modern literature to address itself primarily to the sympathetic nervous system, where it grounds its characteristic adrenaline effects: accelerated heart rate and respiration, increased blood pressure, the pallor resulting from vasoconstriction, and so on’. H. L. Mansel wrote that ‘There are novels of the warming-pan type, and others of the galvanic battery type – some which gently stimulate a particular feeling, and others which carry the whole nervous system by steam’.

So, what was lost in these tumultuous years? I think George Beard and Kierkegaard, in many ways, hit the nail on the head. The idea of freedom, the anxiety of choice, the cacophony of noise, the pressure of time – all of it becomes demanding, a type of demand that didn’t exist in agricultural societies. Yes, life also became better, more prosperous – more options – but remembering what was lost is important too.

So, if modernity is still a shock to you then slow down, take some time, turn off your phone, stop thinking. Relax.

 

Sources

Allan V. Horwitz, Anxiety: A Short History

Nicholas Daly, Blood on the Tracks: Sensation Drama, the Railway, and the Dark Face of Modernity

George M. Beard, American Nervousness

Mark Jackson, The Age of Stress

David G. Schuster, Neurasthenic Nation: America’s Search for Health, Happiness, and Comfort, 1869-1920

Nicholas Daly, Railway Novels: Sensation Fiction and the Modernization of the Senses

The post The Shock of Modernity appeared first on Then & Now.

The Light Side of History https://www.thenandnow.co/2023/10/26/the-light-side-of-history/ https://www.thenandnow.co/2023/10/26/the-light-side-of-history/#comments Thu, 26 Oct 2023 09:22:55 +0000 https://www.thenandnow.co/?p=859

In December 1940, a 43-year-old policeman in London scratched his face on a rose bush. The small wound quickly turned septic, his face ballooned with abscesses and pus, one eye became infected and had to be removed, and the infection spread to his arm and lungs. He was in a huge amount of pain. An escalation like this seems like extreme bad luck to us today. But before antibiotics, life-threatening infection was so common that life expectancy at birth was only around 47.

The policeman’s doctor decided to try a brand new drug, penicillin. He was the first person in the world to receive it.

Around a decade earlier, in 1928, Alexander Fleming returned to his lab from holiday and found one of his petri dishes contaminated with mould. He noticed, though, that the mould inhibited the growth of the bacteria around it, so he added it to other dishes and found the same result.

After four days of treatment the policeman was making what his doctor described as a striking recovery. His temperature returned to normal and he was eating well. On the fifth day, though, the supply ran out. A month later, he died.

It’s been estimated that penicillin has since saved the lives of perhaps two hundred million people, and spared countless others excruciating pain. It is probably the most important life-saving discovery in human history.

But it also points to a paradox in thinking about ‘progress’ in history. Not only was it discovered by accident: the mould had floated in through a window accidentally left open, onto a petri dish accidentally left out on a bench rather than in an incubator, while weather exceptionally cool for the time of year encouraged its growth.

If such a lifesaving drug is the result of chance, how can we think about progress? What drives it? Is it guaranteed? Is it a myth?

Of course, it wasn’t just chance. Fleming was a practicing scientist embedded in the context of institutions, aims, methods, a particular culture, and so on.

And compare this story to what was going on at precisely the same time only a few hundred miles away, in Germany and Poland – millions were being systematically murdered, while the innovations of science and technology were turned to slaughter by Europeans on battlefields across the continent.

How do we make sense of this paradox – that the most important innovation in history, like other medical and scientific advances, was happening at the same time as the most devastating catastrophe?

The historian Will Durant said that, ‘Civilization is a stream with banks. The stream is sometimes filled with blood from people killing, stealing, shouting, and doing the things historians usually record, while on the banks, unnoticed, people build homes, make love, raise children, sing songs, write poetry, and even whittle statues. The story of civilization is the story of what happened on the banks. Historians are pessimists because they ignore the banks for the river’.

Is Durant right? Do we ignore the good in history? Are we all pessimists? How do we even begin to understand the good in history – how it unfolds, what drives it, what promotes it, what we could learn from it? There are countless difficulties. The first: what does good even mean? What’s the measure? The criteria?

Some say health, others happiness, others wealth. Stability? Community? Equality? Or the postmodern critique that it’s impossible to rank values, to compare and classify, or to place any hope in grand narratives? What is a long life if it’s lived under tyranny? What is a wealthy life if those around you live in poverty?

However, if we were to begin with a loose meta-criterion that I think most would agree with while nevertheless disagreeing on precisely what it means, we’d land on something like liberty.

Liberty, broadly speaking, is the freedom to think, to speak, to do, to act, to be oneself, to go where one chooses, to strive in the way one wants to strive. To have as many of the ‘primary goods’ of life as possible in order to do so – food, shelter, transport, even things like good relationships, friendships, opportunities, and so on. Most, I think, would agree that generally, more of these things is better than less.

Liberty in this sense is neutral between competing ideological beliefs or political systems. It begins from a simple premise, that more possibility is better than less; the society that has better access to penicillin is better than the one where you’re more likely to be sent to a gas chamber.

The historical question then is to understand which historical conditions – institutional, political, cultural, philosophical – lead to an increase in liberty and which diminish it. What ideas about liberty seem to work? Where did they come from? Who built on them? Improved them? What diminished or restricted them? The historical question is to search for the causes of liberty so they can be identified and built upon today.

Hegel argued that history was the unfolding of reason through time. Martin Luther King, who read Hegel, argued that the moral arc of history bends towards justice. Marx argued that economic contradictions resolve through history, leading to a more equal society. And more recently some have claimed liberal capitalism as the end of history.

All of these claims are in some sense Hegelian, and the philosopher Terry Pinkard has recently argued in a work on Hegel that the end at work in history is the securing of justice as freedom.

Freedom is the relationship between desire, reasoning, acting on your desires, and recognition and authority. In other words, our desires don’t exist in a vacuum – we are in constant negotiation with others and their desires, with figures and systems of authority that act upon and direct our desires, and so on. Freedom is intersubjective. Social consciousness, culture, and institutions arise out of the interplay of our desires.

With this in mind, Pinkard asks if history makes sense. Is there logic in the way the interplay of desires plays out? Is history comprehensible? Or is it contingent? Random? Messy?

Hegel was a figure of the Enlightenment. Like Kant before him, he believed in a scientific approach to the world – and that included history. He argued that science was bringing the phenomena of the world around us – in nature, in humans, in everything – under ‘the concept’.

What he meant by this was that we have ideas of things – we have ideas of ourselves, our desires, of others, of history. We categorise things – we look at the qualities of things, the causes of things. The historian looks at the causes of World War II, for example.

Importantly, it’s this ability to go about the messy work of building up ideas that makes us human and provides the possibility of even having a history in the first place.

A mouse has a past, but it has no real history. We have ideas of how we acted, why we acted, and how we’ve changed since. A mouse may have a drive to eat, which it acts on, but a human has a concept of eating, under which reasons for eating, what to eat, when to eat, what’s healthy, how to farm, and where to shop are all categorised.

What Hegel is showing is how we make sense of the world – that from our ideas and concepts we make judgements about how to act. Once we understand this we can understand that the idea of salad is a historical one. We’ve brought more understanding under the concept of salad – its chemical composition, its effects, the best ways of growing, distributing, eating it, and so on.

Humans develop conceptions over time – at times ideas fall apart and are discarded and at other times they develop and are adopted. The biblical idea that the sun went around the earth fell apart as it was observed that the opposite was true, so the idea that the bible was the guide to wisdom was slowly superseded by an emphasis on observation and empiricism.

Pinkard writes that, ‘the components of the “Idea” arise in history, but as humans reflect on those concepts, put them to use, and modify them in the course of their collective lives, they refashion them into overall schemes of intelligibility’.

Hegel was expanding on Spinoza’s point that modern scientific enquiry expands outwards towards the ‘perspective of infinity’, by looking at the causes and qualities of the things that help us expand upon our desires and interests.

Pinkard writes that, ‘Hegel concludes that freedom is the capacity to make what truly matters effective in one’s life, and, in modern times, that more or less comes down to acting on our own reasons rather than on vague feelings of guidance from nature, the gods, or those who claim to rule us by natural right’.

This is obviously not just an individual process. Our own ideas and desires come into conflict with others. There are disagreements that play out in culture, institutions, norms, practices, political decisions, etc.

Pinkard writes: ‘history is an arena in which people seek and have sought reconciliation — that is, a kind of justification of their lives — in their social worlds, and they have sought this both individually and collectively’.

When it comes to the meta-criterion of liberty, the judgement to denounce fascism works like the judgement to eat more salad. An individual, directed by education, cultural context, and social information, judges that the former reduced liberty in the past, and that the latter increases energy and lifespan.

Hegel says that we emerge from a ‘realm of shadows’ and move towards the light of the ‘space of reasons’.

If this is true, we should be able to establish some points of historical progress. Which ‘shapes of consciousness’, to use Hegel’s term, which ideas, practices, institutions in history promote liberty?

For Hegel, the process developed as history unfolded from one being free – a king or emperor, free to make their own decisions – to many being free – i.e. an aristocracy – to all, in principle at least, being free.

Hegel argued that pre-Greek societies were paternalistic and authoritarian – ‘rule-followers’ that didn’t interrogate the reasons for following or abandoning certain rules. The Persian, Egyptian, Indian, and Chinese civilisations that preceded the Greeks, he claimed, didn’t approach the world and people as ideas to be studied but were instead absorbed in the world. They didn’t have reflective critical distance, and without these mechanisms for self-criticism there can be no movement in history.

It’s important to note that his interpretation of ancient history has since been much criticised, but for our purposes the important point is less where the process started than the idea that reflective distance on the world matters – the questioning of why some ideas or rules are adopted. The Greeks, he thinks, were ‘self’-conscious – they had a particularly acute idea of the self and asked questions about it.

It’s under these conditions that the question is more forcefully asked: who are ‘the people’? What does ‘freedom’ mean? Who rules?

Pinkard writes, ‘The Greek miracle, as it were, was its creation of the polis, a new form of social and political organization in history in which the ability to defend the community united with an ancient conception of justice into a new kind of unity that broke with the past and thereby combined the advantages of the emotional closeness and solidarity of traditional tribal life with the reflective and economic advantages of an urban life’.

What we have developing is an idea of freedom.

For the Greeks, what made someone free was self-sufficiency – that they weren’t under the sway of others, that they had the means to make decisions and live by their own means, own desires, and that, in Aristotle’s phrase, a person was a ‘law unto himself’. He continued that, ‘it is the mark of a free man not to live at another’s beck and call’. Freedom meant not being compelled, it meant to be self-directing, and crucially, it meant not being a slave.

But women and slaves were excluded. The community had ultimate authority over the individual. The Greek polis and its face-to-face direct democracy struggled to grow.

Benjamin Constant wrote, ‘if this was what the ancients called liberty, they admitted as compatible with this collective freedom the complete subjection of the individual to the authority of the community’.

In some ways, Rome expanded on Greece’s idea and managed to grow by granting citizenship to many of the areas it conquered, but ultimately ruling was left to the aristocracy, senate, and emperor.

However, Pinkard writes that, ‘Once the Greeks had put freedom on the map as a way of thinking about justice, there was a push toward justice as equality and as the mutual recognition of the freedom of all, an actualization of the ideal of each being “his or her own person”’.

If we acknowledge that political liberty – the right to contribute to and be part of the political process, to have rights – is an important part of liberty, then it must be true that the so-called ‘dark ages’ – from the collapse of the Roman Empire in the 5th century to the Renaissance in the 15th – were a regression.

Historians broadly no longer use the term ‘dark ages’, using the Middle Ages instead, with many pointing to achievements in architecture, agriculture, mining, and more.

Nevertheless, monarchism, absolutism, even the Catholicism of the period, don’t fit well under our broad idea of liberty.

In forms of organisation like monarchy and the medieval church, the right to act, move, worship freely, and contribute to the decisions that affect your life is quite clearly restricted in important ways. Social positions are carefully orchestrated from above. Different rights, powers, and privileges are distributed depending on one’s standing and social position. Economic activity, religious freedom, education, and so on, are, or at least can always in principle be, commanded from above.

We should look briefly then at four interrelated moments: the Renaissance, the Reformation, the Scientific Revolution, and the Enlightenment.

When Constantinople fell to the Ottoman Empire in 1453, an influx of migrants into Europe brought with it many ancient Greek texts on everything from music and art to politics and philosophy. The resulting Renaissance – impossible without the printing press, invented around 1436 – led to a flourishing of commentary on old ideas and new ones across the continent.

The ‘discovery’ of America by Europeans in 1492 also revolutionised the attitudes of many Europeans: the world was bigger than long assumed, with more peoples, ideas, and possibilities. It also proved the usefulness of technology – the compass and shipbuilding in particular.

The Reformation would not have been the same without the Renaissance. The German priest Martin Luther’s rejection of the Pope’s supreme authority set off the Reformation across Europe in 1517, encouraging Christians to read the Bible themselves, despite the church forbidding it: no single person or group should have a monopoly on interpreting god’s will.

Protestantism was important because it began to democratise the interpretation of morals and ethics and spirituality. Similarly, the Treaty of Westphalia, signed after the fighting between Catholics and Protestants during the Thirty Years’ War, contained the seeds of the modern idea of the sovereignty of nations, that each nation has the right to determine its own laws, its own course of action. That each, to go back to Aristotle’s phrase, was a ‘law unto himself’.

The Scientific Revolution was happening at around the same time, and by 1700 the world looked very different to how it did in 1400.

Copernicus’s discovery that the earth revolved around the sun rather than the other way around expanded the universe in people’s minds, made the earth just another celestial body, refuted biblical texts, and legitimised the further study of the physical universe. Galileo and Newton revolutionised and formalised the laws of motion and physics, and many began proving that these principles could be applied to innovation through projects like navigational instruments, canal building, architecture, and road improvement. Francis Bacon argued that an inductive method should be used – the careful observation of the world.

All of this led to an interest in and improvement of instruments like the barometer, the telescope, the microscope, the compass, cartography, medical instruments, and on to the steam engine, electricity, and modern engineering.

Paul Hazard places the Enlightenment’s focus on reason as central: ‘Its essence was to examine; and its first charge was to take on the mysterious, the unexplained, the obscure, in order to project its light out into the world. The world was full of errors, created by the deceitful powers of the soul, vouchsafed by authorities beyond control, spread by preference for credulity and laziness, accumulated and strengthened through the force of time’.

Pinkard says that ‘the major turning point in world history has to do with the advantages gained by modern Europeans who have come to comprehend the “eternal justice” of their world as consisting in a kind of commitment to the equal freedom of all’.

The Enlightenment, according to many, may have been contradictory, inadequate, misguided – the idea of the equal freedom of all conveniently not being applied to colonies, slaves, women, or the proletariat. But the question is how, despite taking a painfully slow amount of time, these nascent animating principles – freedom, justice, equal freedom for all – slowly unfolded, complexified, became more forceful, more convincing, more nuanced: from the ancient Greeks, through the Reformation, the Scientific Revolution, and the Enlightenment, and on to Marxism, anarchism, decolonisation, human rights, and the debates about freedom and justice today. What drives this? Is it ideas? Is it economics? Is it innovation? Or is it something else?

I think it’s worth pausing here to reflect on a problem, though. This is a common Eurocentric story. And, as we discussed in The Dark Side of History, the expansion of liberties for some led to the domination of others.

I’m not suggesting a simple triumphalist narrative, and there is much to include that traditionally isn’t – the Islamic Golden Age, the prosperity of the Mughal Empire, science leading to pollution as much as new tools.

Furthermore, it is much easier to measure something as distinct as death rates and violence than it is to measure liberty – what counts as liberty varies enormously across the world. And as we move into the modern era, the different methods, technologies, political solutions, and languages we have developed for choosing freely what to do have expanded exponentially. So let’s return to our initial question: what is liberty?

The philosopher Thomas Hobbes described some places as having ‘more’ or ‘less’ liberty. Friedrich Hayek said that the ‘poor in a competitive society’ are ‘much more free than a person commanding much greater material comfort in a different type of society’. John Somerville said during the Cold War that in the communist world there was more freedom from the power of private money and periodic unemployment.

A brief look at the history of the concept shows the difficulty in agreeing on what liberty means – whether it can be measured like height or weight.

In his book A Measure of Freedom, philosopher Ian Carter writes that, ‘freedom is the absence of preventing conditions on agents’ possible actions’.

Those ‘preventing conditions’ can be many – we might be physically prevented, coerced or threatened, or unable because of a lack of education or resources – but the broad point is that a measure of freedom is the availability of choices.

You might not be free to climb a mountain if you are incapable, but a better society, I’d argue, is one in which, if that is your choice out of many, you have easier access to the resources, education, time, and energy to do so.

The same can be applied to jobs, health, innovation, cooking, art, religion, travel, politics – a good measure of freedom should be applicable to anything. A society with broad access to scientific research is an improvement on one without it; one with the widest availability of ingredients is an improvement on one without; the same goes for the easiest access to healthcare, and so on.

Moving into the 19th century, the new scientific, enlightenment, liberal, rights-based order was becoming dominant throughout Europe. But especially towards the end of the century contradictions began to appear. Was it really capitalism that was responsible for progress? Could capitalism be made more ethical? Could rational state organisation better direct the innovations of science and industry? Could empires be overthrown?

The problem, then and now, is the difficulty of agreeing on the causes of liberty. If we say science – or at least some of it, like medicine, tools, architecture – has been fundamental in improving the lives of most people, then the focus should be on discovering, protecting, and augmenting the conditions that led to its rise and proliferation.

Historians of the Scientific Revolution emphasise the activity of academies, collaboration, empiricism, new ways of reporting experiments as if the reader could witness them – the start of ‘peer review’ – the printing press, and the availability of information, but the precise conditions are always difficult to agree on.

Another example of this problem comes from the study of the decline of violence. It’s mostly agreed now that homicide and violent crime declined from the end of the Middle Ages through, roughly speaking, to today. Some – like the historian Pieter Spierenburg – argue that the cause was the monopolisation of state power. As monarchs became more secure and consolidated their authority, the royal court became a politer and more ‘civilised’ place as lords had to jostle for favour, and the monarch was able to capitalise on their power by being more intolerant of volatility. Others have pointed to the rise of commerce and the need for more ‘civil’ interaction between people to make one’s way in life.

On the other hand, the historian Mark Mazower argues that this state monopolisation of power led to the death toll of the two world wars, the Holocaust, and nuclear bombs in the twentieth century, contradicting the story of civil progress.

The point, again, is that the causes of any type of progress are always difficult to identify: just because a monarch imposed order where elite violence would have previously gone unpunished, say, that doesn’t necessarily mean the premise, ‘absolute monarchy causes less violence’, is universally true and so we should support absolute monarchy. This is an error in attribution.

Steven Pinker, who relies heavily on these sorts of arguments in his The Better Angels of Our Nature: Why Violence Has Declined, falls into this trap.

Historian Gregory Hanlon notes that while Pinker is correct to ‘underline the vertiginous drop in violence since the end of the middle ages’, he is also prone to ‘wild exaggeration, hyperbole, junk statistics and reference to fiction as if it were fact’, and that he has, ‘exaggerated, often outrageously, the contrast between then and now’.

And in a particularly damning critique in the introduction to a special issue of History & Theory looking at Pinker’s work, the authors write: ‘the overall verdict is that Pinker’s thesis, for all the stimulus it may have given to discussions around violence, is seriously, if not fatally, flawed. The problems that come up time and again are: the failure to genuinely engage with historical methodologies; the unquestioning use of dubious sources; the tendency to exaggerate the violence of the past in order to contrast it with the supposed peacefulness of the modern era; the creation of a number of straw men, which Pinker then goes on to debunk; and its extraordinarily Western-centric, not to say Whiggish, view of the world’.

Any attempt to make sense of history requires understanding multiple disciplines, has unavoidable ideological biases, and quickly gets very complicated.

That doesn’t mean we should give up. To discern a drop in violence and roughly identify some of its causes, to know what encourages scientific discovery, to discern the conditions that have led to increases in democracy, to know what protects against totalitarianism, to be able to understand, however imperfectly, many other questions like these – that is good progress in itself. But history is obviously not a story of simple, easy-to-understand progress. We try things, get things wrong, give power to the wrong people, go down wrong turnings; we’re prone to accidents and the misuse of ideas; we forget or lose things; new problems develop; freedoms for some lead to catastrophe for others.

This is why Hegel said that the owl of Minerva flies only at dusk – we make sense of what’s happened only in retrospect.

In 1854 the physician John Snow mapped the houses hit by a cholera outbreak in London. He discovered that the cases clustered around one water pump. Snow’s discovery was a huge breakthrough in the prevention of communicable diseases, proving that cholera was not airborne, as people thought, but caught from contaminated water. It led to an unprecedented focus on sanitation, sewage works, clean water, and toilets, and in doing so saved countless lives.

Snow looked at the causes of something in the past to make conclusions about how to prevent it in the future. It was this tradition that Alexander Fleming was working in, and one that led to a vast range of advances in health.

History is a scientific discipline. It’s different to, say, physics, but it’s still the study of objects – diaries, letters, newspapers, memos, images – to create an accurate picture of the past: it can be as close as possible to object-ive. And it can still attempt to draw generalisable patterns from a set of observations. It’s much more open to interpretation than many other disciplines – finding the causes of poverty, of affluence, of happiness – and much more difficult to apply, because we’re not germs or rocks: we respond. But historians have avoided making strong claims about the use of history for policy, politics, and thinking about the future, and I think that’s a mistake. We should still use history to understand the likely outcomes of scenarios and conditions, to predict what works and what doesn’t.

In the aftermath of the Holocaust many argued it was grotesque to talk about progress, about Hegel, about the cunning of reason. It wasn’t to be made sense of – its unpredictable evil disproved progress, disproved an interested, benevolent god, disproved the natural goodness of man, disproved a lot of things. It left a hole in our human nature.

But if Hegel was right about progress, the idea of the ‘cunning of reason’ is not that the Holocaust was some cunning way of enticing progress, but a horrific veering off from reason that demands instead a reasonable response – how might we avoid something like it happening again?

And since then, there has been a lot of good research on why genocide happens – I’ve explored some of it in this video – research that helps us see the causes and try to institutionalise and culturalise their avoidance, to create inoculations against them in the same way we avoid cholera.

As our ability as a species to influence the world around us grows, the tripwires we lay become all the more threatening and the stakes higher; as we become more powerful, we become more dangerous to each other. With AI, the Anthropocene, nuclear weapons, the large levers of state power, and big capital, we live in a crucial moment, and we must protect against our worst impulses and incentivise our best – or we could, quite easily, trip up and wipe ourselves out. None of those threats is hyperbole; they are very real.

But if we look to how people in the past have capitalised on the possibility for liberty, we have to be cautiously but actively optimistic. I think when we look at the Dark Side and the Progress in history, the word that comes to mind is bittersweet.

 

Bibliography

 

Terry Pinkard, Does History Make Sense?

Matthew White, Great Big Book of Horrible Things

Justin Smith, Irrationality: A History of the Dark Side of Reason

George M. Beard, American Nervousness

Mark Jackson, The Age of Stress

Allan V Horwitz, Anxiety: A Short History

Clive Emsley, Crime and Society in England: 1750-1900, 3rd ed., Harlow: Pearson, 2005

David Taylor, Crime, Policing and Punishment in England, 1750-1914, London: Macmillan Press, 1998

V.A.C. Gatrell, Crime, Authority and the Policeman State

James Le Fanu, The Rise and Fall of Modern Medicine (London: Basic Books, 2012).

George Rosen, A History of Public Health (Baltimore: Johns Hopkins University Press, 2015).

David Armstrong, Political Anatomy of the Body

Marius Turda, Modernism and Eugenics

David Wootton, Power, Pleasure, and Profit: Insatiable Appetites from Machiavelli to Madison

Dipak Basu and Victoria Miroshnik, Imperialism and Capitalism

Mike Davis, Late Victorian Holocausts

P.J. Cain and A.G. Hopkins, British Imperialism 1688-2015

William Dalrymple, The Anarchy: The Relentless Rise of the East India Company

Philip Dwyer, Violence & Its Histories: Meanings, Methods, Problems

Andrew Linklater and Stephen Mennell, ‘Norbert Elias, The Civilizing Process: Sociogenetic and Psychogenetic Investigations—An Overview and Assessment’, History and Theory

Gregory Hanlon, The Decline of Violence in the West: From Cultural to Post-Cultural History

Susan Neiman, Evil in Modern Thought: An Alternative History of Philosophy

Steven Pinker, The Better Angels of Our Nature

Adorno & Horkheimer, Dialectic of Enlightenment

Donald G. Dutton, The Psychology of Genocide, Massacres, and Extreme Violence: Why ‘Normal’ People Come to Commit Atrocities

Kristina DuRocher, Raising Racists: The Socialization of White Children in the Jim Crow South

Hanson, Jon, and Kathleen Hanson. “The Blame Frame: Justifying (Racial) Injustice in America.” Harvard Civil Rights-Civil Liberties Law Review, vol. 41, no. 2, Summer 2006, p. 413-480. HeinOnline.

Stewart E. Tolnay and E.M. Beck, A Festival of Violence: An Analysis of Southern Lynchings, 1882-1930

https://www.ferris.edu/jimcrow/brute/

Jason Stanley, How Fascism Works: The Politics of Us and Them

Ervin Staub, The Roots of Evil

John Henry, The Scientific Revolution and the Origins of Modern Science

Ian Carter, A Measure of Freedom

The post The Light Side of History appeared first on Then & Now.

How Immigrants Became ‘Bad’ https://www.thenandnow.co/2023/10/16/how-immigrants-became-bad/ Mon, 16 Oct 2023 13:00:08 +0000

The post How Immigrants Became ‘Bad’ appeared first on Then & Now.

When Tucker Carlson told viewers of Fox that immigration would ‘dilute’ the political power of Americans, when Trump told Americans immigrants were sending their worst, they had a well of unscientific history to draw from.

It’s a history that attempts to pin people down, categorise and classify them, hold them in place, bar and banish them, despite what science is increasingly showing us: migration is the norm. Immobility is abnormal.

Liberalism – the assumptions of which many of us live under – prioritises individual freedom, of thought, of expression, of movement.

But at the same time we think of migration – which is free movement – as abnormal.

We even mythologise a sedentary past – of villages, farmers, peasants, ‘tied to the land’, living and dying in the place where they’re from.

Yet in the 17th century, around 65% of people left their home parish at some point in their lives.

We have what philosopher Alex Sager calls a ‘sedentary bias’.

The migrant is presented as a problem, alien, outsider, yet we move around our own countries – commuting, deciding to live elsewhere, holidaying, visiting relatives, making work trips – without thinking it’s in any way strange.

We are, as a species, mobile, nomadic, built to move.

In 2020, there were around 280 million migrants, and each year around a billion tourists. And the numbers are increasing.

But so are the objects, ideas, and phenomena – borders, passports, guards, barbed wire, nationalist rhetoric – that attempt to pin us in our place.

Can we find a genealogy of our attitudes? A history of our present problem? To do so, we might start with the 18th century biologist Carl Linnaeus.

Linnaeus was born in Sweden in 1707 during a period when Europeans had been exploring the globe and returning with stories of strange places, peoples, and creatures. Some – like Arnoldus Montanus – wrote and illustrated books about these bizarre alien lands without ever leaving the comfort of home. Zoos, museums, galleries, and menageries exhibited these incredible new foreign curiosities.

Linnaeus – always fascinated by the natural world – wanted to contribute to scientific understanding of the planet’s great biodiversity.

He came up with a system of simple categorisation – a taxonomy.

He’d give each species two names in Latin. The first a general category, the second a specific one.

Linnaeus divided organisms into classes, genera, and species, depending on a number of characteristics, including where they were found.

He published his revolutionary book Systema Naturae in 1735.

But when it came to humans, Linnaeus faced a problem. How would the different races of humans fit into his taxonomy?

The Bible told us that all humans were created by God and descended from Adam and Eve. They must be the same.

But the prevailing consensus at the time was that non-European peoples were primitive, savage, and biologically different.

Voltaire had written that, ‘the Negro race is a species of men as different to ours as the breed of spaniels is from that of greyhounds’.

Linnaeus had a rival.

Georges-Louis Leclerc, Comte de Buffon was also a naturalist.

In opposition to Linnaeus, though, de Buffon believed that instead of adhering to strict categories, nature was dynamic, changing, in flux.

He thought humans had migrated and adapted to local conditions as they moved around the planet.

Like almost everyone at the time, de Buffon still believed in a hierarchy. The farther from the Garden of Eden humans had moved, he thought, the more their biology degenerated.

He published his own book Histoire Naturelle in 1749. It was a Europe-wide success.

But Linnaeus’ celebrity grew.

Species couldn’t degenerate that much, he retorted to de Buffon. It was blasphemy. Species – including humans – were born, lived, existed, precisely where god had intended them to.

‘It is impossible’, Linnaeus wrote, ‘that anything which has ever been established by the all-wise Creator can ever disappear’.

By the 10th edition of Systema Naturae, Linnaeus had classified 8,000 plants and 4,000 animals, including several races of humans:

Homo troglodytes, from the Antarctic, could eat raw flesh.

Homo caudatus, of Borneo and Nicobar, had tails.

Homo monstrosus, from Lapland, included giants and dwarfs.

Homo sapiens europaeus were ‘white, serious, and strong’, ‘active, very smart, inventive’.

Homo sapiens asiaticus were ‘yellow, melancholy, greedy’.

Homo sapiens americanus were ‘ill-tempered’ and ‘obstinate’.

And Homo sapiens afer, from Africa, were impassive, lazy, crafty; slow, foolish, and ruled by caprice.

The idea of these biological distinctions between races dominated European science, developing across the 19th century into a new field: race science.

This 10th edition of Systema Naturae was a triumph and became accepted over de Buffon’s interpretation of nature. Louis XV ordered it official.

Rousseau said he knew of no greater man on earth.

After publishing On the Origin of Species, Darwin argued that environmental differences had produced the adaptations seen in humans.

But race scientists argued that there were clear, fundamental biological differences. Darwin quickly became sidelined and descended into despair. He had episodes of hysterical crying. As he lost his influence, many scientists who adopted the subspecies view believed him to be crazy and ignorant.

How could a single species travel so far around the planet? How could ancient Israelites have reached the Pacific Islands?

These were clearly separate biological races.

Darwin performed experiments, submerging seeds in water to see if they could survive long journeys, and feeding them to fish and birds, retrieving them from their droppings, and seeing if they still germinated.

But the human subspecies view won the day. The Natural History Museum in London displayed models of different human species. The Bronx Zoo had a similar display on the ‘Races of Man’. They kept a man from Congo – Ota Benga – in the monkey house, where visitors watched him play. He was only released after protests in 1906.

It was clear to all that god and science had intended a separate, distinct, biological hierarchy of man.

The separation of humans into a hierarchy of species almost logically and naturally led to a global – or at least Western – concern: degeneration, the mixing of genes, the dilution of hereditary superiority.

Darwin’s cousin, Francis Galton, led a new movement: eugenics. Policy makers, he argued, should focus not on education or investment but on breeding good, pure citizens.

Through the Galton Society, scientists warned of the impact of mass-migration, of racial contamination.

Many US states banned interracial sex and marriage in the late nineteenth century.

Biologist Charles Davenport warned that Americans could ‘rapidly become darker in pigmentation, smaller in stature, more mercurial, more attached to music and art’, and ‘more given to crimes of larceny, kidnapping, assault, murder, rape and sex-immorality’, if races mixed.

President Coolidge wrote about the ‘biological laws’ that ‘tell us that certain divergent people will not mix or blend’. America, he declared after signing a bill to restrict immigration, must be kept American.

University courses on eugenics skyrocketed. Passports and identification documents became more common.

The US closed its borders to migrants for the first time in its history. Immigrants had to take intelligence tests at Ellis Island.

Immigration into the States declined from around 800,000 a year in 1921 to 100,000 after 1929. Ellis Island closed in 1954.

Even ships of refugees fleeing from the Nazis were turned back. One ship – the St Louis – reached Florida and was sent back to Europe. 254 of its passengers died in the Holocaust.

The Nazis, obsessed with purity, even advocated the destruction of foreign plants in Germans’ gardens. Himmler issued landscaping rules that banned any non-native species.

A popular BBC series and 1958 book, The Ecology of Invasions by Animals and Plants, urged the protection of domestic species against invading alien ones.

The foundation of all of this – the belief in biological distinction – would persist for centuries. When, after the Holocaust, the UN released a statement that condemned racial distinctions, leading scientists protested.

The leading British scientist W.C. Osman Hill wrote, ‘I need but mention the well-known musical attributes of the Negroids and the mathematical ability of some Indian races’.

Evolutionary biologist Julian Huxley also pointed to the ‘rhythm-loving Negro temperament’.

83 of 106 anthropologists refused to sign the UN statement.

In 2018 the US Citizenship and Immigration Services changed its mission statement from ‘Fulfilling America’s promise as a nation of immigrants’ to ‘securing the homeland’.

The twentieth century might be looked back on as the century we rediscovered movement. Advances in technology led to an almost unbelievable expansion of railways, roads, airports, and even space travel.

In art, painters like van Gogh had already tried to bring movement back into still images.

Film and radio developed.

Philosophers like Deleuze brought change, movement, and dynamism back into a field that, Deleuze thought, had become too static, too representational.

But Linnaeus’ belief that species were native to specific locations continued throughout the twentieth century. No-one believed that humans – let alone many animals – could have dispersed so far and wide across the globe. Creatures couldn’t migrate from Africa to the Pacific Islands. They couldn’t swim thousands of miles. Species had to have evolved separately.

It took technology only invented in the late twentieth century – GPS and modern DNA analysis in particular – to discover a fact that shocked scientists: around half of all species aren’t sedentary, they’re on the move.

And it’s only in the last couple of decades that the real extent of this discovery is becoming clearer.

Animal migrations are incredibly difficult to study. Even harder to understand is our prehistoric past. Tracking technology was heavy, expensive, and couldn’t be used over long distances. Solar-powered GPS tags changed this.

Suddenly, researchers could track migrations on a scale no one had ever suspected: 70,000 km migrations of terns, zebras walking over 500 km, crocodiles swimming 200 miles out to sea, dragonflies flying hundreds of kilometres a day. Everything from sharks to wolves migrating thousands of miles.

A new field of study – movement ecology – rapidly developed.

This video from Movebank logs the movement of 8000 animals fitted with GPS tags: https://www.youtube.com/watch?v=nUKh0fr1Od8&ab_channel=Movebank

Linnaeus’s practice of using geographic location in a species’ name has become, for the first time, unreliable. The natural world is much more fluid than we ever realised.

Only in the 1980s did modern DNA analysis finally prove that Homo sapiens is one species with a common ancestor. In 2000, the Human Genome Project found that the differences between us account for about 0.1% of our gene sequence.

As journalist Sonia Shah points out, migration is so common that it’s pointless asking why people migrate, but rather, we should ask why anyone stays in the same place.

She writes ‘migration is encoded in our bodies, just as it is in wild species’. It’s a force of nature, a fact of life, built into biology itself.

Yet despite this, we’re increasingly trying to stop it, thinking of humans as naturally sedentary rather than biologically dynamic.

In 1945 there were just five border walls in the world.

By 1991, there were still only 19.

In 2016 there were 70.

North Korea encages its people. India fences itself off from Pakistan and Bangladesh. Tunisia has built trenches filled with water along its border with Libya. In Hungary, prisoners were used to build a fence along its border with Croatia. Israel uses razor wire, sensors, and infrared cameras. Britain and France have increased the fencing at the channel tunnel. And Trump’s border wall lengthened the US-Mexico barrier by almost 500 miles.

However, walls, as Wendy Brown has argued, are more effective as political theatre and rhetoric than at preventing the flow of migration. Instead, they just send migrants through different routes, create an underground smuggler economy which increases crime, and ultimately make migrant routes more dangerous.

And, of course, they impose an artificial order on what – as we’ve seen – is a natural global phenomenon found in every species.

Our nationalist bias, our sedentary bias, makes these things appear natural, the way the world is, the way it has to be, while often obscuring the complexity of borders as a phenomenon.

They separate families, cut off jobs, and always imply the violence needed to defend them.

For a rich person, borders often signify excitement, adventure, holiday, vacation. For people from poor countries, a border means something entirely different: a prison, a limit, an obstacle.

Jonathan Moses has argued that we could even draw an analogy between international borders and apartheid. Moses asks, ‘Why is the Dane’s advantage over the Somalian legitimate (and protected by international law), while the Afrikaner’s advantage over the Xosi was not?’

For millennia, migration was a part of human life, of all life. Slow but steady. Science and technology have had a strange effect on that history. Inductive science – the careful study of the world – has tended, historically, to collect evidence in a snapshot, at a specific point in time, and then announce that it has found a universal truth. It finds people where they are, and presumes that’s where they belong. And just as scientific racism pinned everyone down, technology sped everyone up, leading to a contradiction that both builds walls and encourages more movement.

And this contradiction is only going to become more pronounced.

Between 2008 and 2014, floods, storms, earthquakes, and other disasters displaced 26 million people around the world. In 2015 alone, 15 million people were forced to flee wars. In that year, a million of them migrated across the Mediterranean.

When we look at these people, we tend to take the ‘states’ that they are moving between, moving through, as the natural unit of analysis – as if those people were misfits or aliens, in or out of a container.

But as Ulrich Beck has noted, as we become more global, as the world becomes quicker and more connected, ‘the unity of national state and national society comes unstuck; new relations of power and competition, conflict and intersection, take shape between, on the one hand, national states and actors, and on the other hand, transnational actors, identities, social spaces, situations and processes’.

Scientific racism, human taxonomy, the state, border walls, guards, passports, global inequality, all hide the fact that not only are we all migrants in our bones, but that increasingly, we are globalised ones, with many more options and desires than ever before. There’s not only the possibility of more global displacement from disasters, wars, poverty, or climate change, but more movement as we all become more mobile, dynamic, international.

We should focus on ways not just to facilitate this, but to encourage it, to make it more efficient, easier, more dynamic.

The UN’s Global Compact for Safe, Orderly and Regular Migration, for example, encourages international support to do just this.

We all know that we want to move around the world as we wish – for work, for vacation, to see family – but we rarely reflect on the contradiction and injustice that makes this possible for many of us but impossible for many others.

As centuries of naïve and crude pseudoscience are refuted, as we rediscover movement, mobility, and our migrant impulse, should we not be trying not to build walls, but to realise that we’re all on the move?

 

Sources

Sonia Shah, The Next Great Migration

Alex Sager, Toward a Cosmopolitan Ethics of Mobility

https://www.theguardian.com/media/2018/dec/18/tucker-carlson-immigrants-poorer-dirtier-advertisers-pull-out
