My unexpected Pride icon: Fast & Furious is my favourite camp classic

Any film where cast members talk about chosen family and Dwayne ‘the Rock’ Johnson busts a cast off his broken arm by flexing his biceps has a place in the gay canon

I am a 42-year-old lesbian who can’t drive. And, since I’m baring all, I will add that I loathe people who drive extremely fast in obnoxiously large cars. Which, unfortunately, seems to be every third person in the US. In short, I’d wager I’m probably not the target audience for the Fast & Furious films.

I’m sure I don’t need to explain the blockbuster franchise to you: the first instalment came out in 2001 and the series has generated billions. But if you are somehow unfamiliar with them, the basic premise is that a ragtag team of misfits and street racers travel around the world, driving cars fast and furiously, beating up baddies.


French drivers sue Tesla over Elon Musk’s right-wing politics

A coalition of 10 Tesla owners in France is suing the American electric vehicle company over CEO Elon Musk’s former role in the White House and support of the far right in Europe.

“They don’t want to be associated anymore with Tesla or personified by Elon Musk and his recent political stances,” said Ivan Terel, a partner at GKA, the Paris law firm representing the owners.

Tesla sales plummeted across Europe after Musk spoke at a rally for the far-right Alternative for Germany (AfD) party in January, telling the crowd it was time for Germany to “move on” from its Nazi past, which sparked calls to boycott the EV brand.

In France alone, Tesla sales declined 67 percent in May compared with the same month in 2024, according to registration data from the PFA, the French automotive industry association. The impact isn’t limited to boycotts: vandals have targeted dealerships, and individual Tesla drivers have found their cars damaged.

Several of Terel’s clients are among those whose vehicles were vandalized, with one discovering a swastika painted on the side and another finding it defecated on, he said.

GKA filed the suit in the Paris Commercial Court on Wednesday, seeking to have the vehicle leases voided and for its clients to be repaid the original cost of ownership, plus other damages.

The lawsuit uses an article from the French civil code that states the seller of an item must “guarantee clients a peaceful use of the goods sold,” Terel said, calling it an old law that is being used in unprecedented circumstances due to Musk’s role in the White House.

Musk was a key ally of U.S. President Donald Trump in the first 100 days of his second administration, leading the American government’s drive to slash federal spending. But the relationship between the two tycoons quickly soured, leaving Musk out in the cold.

A commercial court judge will decide if the case has merit and whether others can join the lawsuit in the interim. Several other Tesla owners have inquired about the case since GKA filed suit, according to Terel.


Israeli government issuing ‘illegal’ orders that must not be obeyed, say IDF intelligence officers

In letter to Netanyahu, 41 signatories say hostages given ‘death sentence’ and Israel waging ‘unnecessary’ war

Israel’s government is issuing “clearly illegal” orders that must not be obeyed, a group of Israeli military intelligence officers have said, announcing they will no longer participate in combat operations in Gaza.

In a letter addressed to Israel’s prime minister, Benjamin Netanyahu, the defence minister, Israel Katz, and the head of the military, the group of 41 officers and reservists said the government was waging an “unnecessary, eternal war” in Gaza.


America Is a Failed State with Good Branding


The American state was built for a different scale, a different economy, a different kind of citizen. Its machinery was designed to serve thirteen colonies and agrarian interests - not 330 million people governed through platforms, proxies, and predictive sentiment models. Somewhere along the way, the civic infrastructure stopped scaling. Then it started slipping. Now it’s permanently behind.

The institutions still exist. They hold meetings, revise mission statements, send out newsletters - some of them on Substack, apparently. But their capacity to act has withered. What remains is performative continuity.

The facades are intact. The flags wave. The elections proceed. The agencies function. But beneath it all, an administrative sclerosis has set in, accumulating like plaque in the arteries of the state. The system hasn’t crashed. It’s coagulated.

This is how decline manifests in developed nations: not with explosions, but with bottlenecks. Not with fire, but with forms.

Collapse doesn’t always arrive as spectacle. Sometimes it arrives as stagnation, repeated so often it becomes tradition. Institutions persist in form but erode in function. Capacity degrades incrementally, unnoticed, until dysfunction is mistaken for normalcy - until people begin to believe this is just how things work.

By the standards of political science - loss of state capacity, erosion of legitimacy, failure to deliver basic services - the United States has already failed. Not theoretically. Operationally. The only thing keeping the system upright is the myth that it can’t fall.

This isn't about Donald Trump's second term.

It's not even about his first.

It's not about who holds the office.

It's about a decline that started decades ago.

What Does It Mean to Fail?

The Fund for Peace’s Fragile States Index evaluates countries using twelve indicators across cohesion, economics, politics, and society. These include things like factionalized elites, uneven development, group grievance, state legitimacy, public services, and external intervention. The list is clinical, almost algorithmic. Plug in U.S. metrics, and it performs well. At least, relative to Sudan.

But does its score reflect reality?

Congressional approval was 8% last year. Confidence in the Supreme Court is in freefall. Faith in electoral integrity is negative-sum: each half of the electorate believes the other side’s votes are illegitimate. Public trust in media, science, and government is at or near all-time lows. The idea of a shared national consensus is a museum piece.

America cannot build high-speed rail, deliver clean water in major cities, or prevent bridges from collapsing. Life expectancy is declining. Medical debt is the leading cause of bankruptcy. Public schools are under siege from both underfunding and politicization. Disaster relief is Twitter-based triage. None of these are surface flaws. They are structural.

So why doesn’t America appear on the Fragile States Index as a failure? 

Because it still exports liquidity, culture, and legitimacy at scale. And because the definition of a failed state implicitly assumes you’re not wearing a Rolex.

Bureaucratic Paralysis as National Identity

The California High-Speed Rail project, originally proposed in the 1980s, is now a kind of bureaucratic performance art. As of 2024, its projected cost has swelled past $100 billion and it still doesn’t connect San Francisco to Los Angeles. A single environmental review document ran over 25,000 pages.

Compare this to China, which built 40,000 kilometers of high-speed rail in roughly the same period. This isn’t authoritarian efficiency, however tempting that explanation is as a morally comforting excuse. It’s a different relationship to modernity. America’s is defensive. Each infrastructure project must survive lawsuits, political sabotage, media cycles, and Byzantine procurement rituals.

The French historian Marc Bloch wrote that institutions, like people, are slow to realize they are dying. America’s infrastructure systems aren’t dead. But they are entombed in legal amber, incapable of adaptation.

A Monopoly on Violence

Max Weber famously defined the state as the entity that claims a monopoly on the legitimate use of force. The United States has taken this concept and franchised it. Police forces operate with near-complete autonomy. ICE agents stage raids with minimal oversight. Private military contractors have carried out operations abroad and at home. During protests, cities borrow gear from the Pentagon and behave like occupying forces.

There are nearly 18,000 law enforcement agencies in the U.S. They don’t follow the same rules. They’re not interoperable. They compete for federal funding and attention. This isn’t monopoly. It’s a decentralized patchwork with wildly uneven accountability. In some towns, the sheriff is a small-town administrator. In others, he’s a warlord with a badge.

When George Floyd was murdered, the international outcry treated it as an exceptional horror. It wasn’t. It was a statistically common interaction between Black citizens and armed state actors. The exception was that someone filmed it.

If a failed state is one where the government can’t restrain its own forces or deliver justice to victims, America’s status should be clear to any rational observer. 

Seeing Like a Collapsing State

James C. Scott’s “Seeing Like a State” argues that modern governments require legibility: simplified, abstracted views of society that allow bureaucracies to manage complex systems. But what happens when the people running the state can no longer read the map?

In the U.S., basic functions - like issuing driver’s licenses, running elections, or disbursing unemployment insurance - depend on 40-year-old software. When California's unemployment system failed during COVID, it lost $30+ billion to fraud. Not because of corruption, but because no one understood the codebase anymore.

The federal government still uses COBOL. State governments outsource core functions to private contractors who subcontract to other contractors, until no one is accountable and the system becomes a black box. Voters receive ballots that are illegible. Public health advisories contradict each other. Wildfires go uncontained because jurisdictions fight over whose map is “real.”

A state that cannot make itself legible to its citizens, or itself, does not govern. It lurches.

The Trident: Dollar, Myth, Interface

Why does it still appear to work? Why do we still uphold America as The World’s Oldest Democracy? The answer lies in three forms of structural suspension.

First, the dollar. As the global reserve currency, the dollar lets America borrow at fantasy rates, run deficits without punishment, and export inflation. It is the ballast beneath a leaking ship. When you can print the world’s money, your collapse timeline gets extended. But not indefinitely.

Second, the myth. American exceptionalism is a civic religion. The Founders are saints. The Constitution is scripture. The rituals of democracy - flag pins, Fourth of July fireworks, elementary school civics lessons - provide continuity even as the content they supposedly anchor decays.

Third, the interface. America still *feels* like a country. You can go to DMV websites, call 911, vote, argue on Reddit, see Supreme Court headlines. It’s like using a beautifully skinned app whose backend database has been corrupted for years. The front-end performance persists. The core logic does not.

Together, these three props allow for a condition we might call “velvet failure”: the soft, slow erosion of governance that looks like normality but functions like managed decline.

A Slow-Motion Hegemonic Implosion

To paraphrase (as much as I'd dare) the historian Ernest Gellner, nationalism is what happens when a state and a culture disagree about whose borders contain whom. America’s version of that disagreement is internal. The state believes it governs a united polity. The culture has already seceded into a hundred overlapping simulations - religious, ideological, algorithmic.

In some simulations, America is an empire. In others, a failed experiment. In others still, a corporation with a military. These simulations do not resolve. They coexist, and in doing so, render collective action impossible.

What happens when a country cannot update its shared reality? When every major institution is in epistemic freefall? When its myths are pristine, but its outputs are broken?

The answer: it keeps going. Until it can’t.

You can fool the world with good branding. You can even fool your own citizens. But - in popular parlance - the body keeps the score. Eventually, the trust deficit becomes a debt that can’t be rolled over. Eventually, the dollar loses its place. Eventually, the lights go out.

It won’t be sudden. Collapse is rarely cinematic. It’s procedural. It looks like decaying buildings and delayed permits and missing antibiotics. It sounds like voicemail systems and lawsuits. It feels like acceleration without agency.

America is not going to become Venezuela. It’s going to become something stranger: a state that performs itself convincingly even as it forgets how to function. A simulation of governance wrapped in prestige television.

And when the myth runs out, and the branding fades, what remains?

A failed state. With very good fonts.


France considers requiring Musk’s X to verify users’ age

The French government is considering designating X as a porn platform — a move that would likely force the platform to implement strict age verification requirements.

Such a designation could effectively ban children from accessing the social media app unless it curtailed adult content. Paris has recently upped its efforts to protect kids online by requiring age verification by porn platforms.

“X has indicated since 2024 that it accepts the distribution of pornographic content. It must therefore be treated as such,” Digital Minister Clara Chappaz’s office told POLITICO.

Her team has been tasked with “examining the designation of X in the decree concerning pornographic sites that must verify the age of their users.”

The confirmation follows an appearance by Chappaz on French TV show “Quotidien” on Thursday evening, where she said X will soon receive “the same pretty papers as YouPorn” instructing X to ban adult content or implement age screening.

Porn platforms serving content in France are required to implement age verification measures with a final deadline of June 7, although some are protesting.

Failure to comply could see sites fined, delisted from search engines or blocked completely.

Last week Tanzania blocked X over the presence of pornographic content. Last year Belgium’s media regulator expressed concerns that Elon Musk had turned X into a porn website.

X did not immediately respond to POLITICO’s request for comment.


Trusting your own judgement on 'AI' is a huge risk

(This is loosely based on a couple of social media threads I posted last week, made longer and more tedious with added detail.)


One of the major turning points in my life was reading my dad’s copy of Robert Cialdini’s Influence: The Psychology of Persuasion as a teenager.

Other highlights of my dad’s library – he was an organisational psychologist before he retired – included books by Fanon, Illich, and Goffman, and a bunch on systems thinking and systems theory. So, in hindsight, I was probably never not going to be idiosyncratic.

But Cialdini’s book was a turning point because it highlighted the very real limitations of human reasoning. No matter how smart you were, the mechanisms of your thinking could easily be tricked in ways that completely bypassed your logical reasoning, inserting ideas and triggering decisions that were not in your best interest.

He documented tactics and processes used by salespeople, thieves, and con artists and showed how they could manipulate and trick even smart people.

Worse yet, reading through another set of books in my dad’s library – those written by Oliver Sacks – indicated that the complex systems of the brain, the ones that lend themselves to manipulation and disorder, are a big part of what makes us human.

But to a self-important asshole teenager, one with an inflated sense of his own intelligence, Cialdini’s book was a timely deflation as it was specifically written as a warning to people to be careful about manipulation.

He described it as a process where the mechanics of cognitive optimisation in the brain could be deceptively triggered into a semi-automatic sequence that bypassed your regular judgement – his very eighties metaphor was like that of a tape in your mind that went “click whirr” and played in response to specific stimuli.

These are what I tend to call psychological or cognitive hazards. Like the golfer’s sand trap, the only way to win is to avoid them entirely.

  • Intelligence is not a defence. Most of these mechanisms and situations feel reasonable and intelligent in the moment.
  • Willpower, under these circumstances, is a myth. You can’t “will” your way to resisting these tactics. The only defence is avoidance.

What made me especially receptive to this idea at the time was the recent experience of having been sold a shitty CD player – that was obviously a sub-par model – by an excellent salesman.

“I’m smart. I’m a nerd who knows electronics. Why did I fall for these tricks?”

Because we all do is why.

Of course Cialdini later went on to focus more on teaching people how to use manipulative tactics to convince and sell, but his later lapse of ethical judgement doesn’t disprove the value of his groundbreaking initial book.

The reason why I’m outlining just how weird I was as a teenager and a young man is that software developers in particular are prone to falling for these hazards, and few in the field seem to have ever had that “oh my, I can’t always trust my own judgement and reasoning” moment that I had.

A recent example was an experiment by a Cloudflare engineer who used an “AI agent” to build an auth library from scratch.

From the project repository page:

I was an AI skeptic. I thought LLMs were glorified Markov chain generators that didn’t actually understand code and couldn’t produce anything novel. I started this project on a lark, fully expecting the AI to produce terrible code for me to laugh at. And then, uh… the code actually looked pretty good. Not perfect, but I just told the AI to fix things, and it did. I was shocked.

(If you don’t know what I mean with “an auth library” just know that it’s the most security sensitive and attacked point of any given web service. The consequences of a security flaw in this kind of library are potentially huge.)
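
To make the stakes concrete, here’s one classic category of auth bug – a hedged illustration of the kind of thing such a library has to get right, not code from the project in question. Comparing secrets with ordinary equality short-circuits on the first mismatched byte, which can leak timing information; careful auth code compares in constant time:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Naive comparison: === bails out at the first mismatched character,
// so response times can leak how much of a token an attacker has guessed.
function naiveCompare(expected: string, provided: string): boolean {
  return expected === provided;
}

// Constant-time comparison: hashing both values yields two fixed-length
// buffers, and timingSafeEqual then inspects every byte regardless of
// where (or whether) they differ.
function constantTimeCompare(expected: string, provided: string): boolean {
  const a = createHash("sha256").update(expected).digest();
  const b = createHash("sha256").update(provided).digest();
  return timingSafeEqual(a, b);
}

console.log(naiveCompare("secret-token", "secret-token"));       // true
console.log(constantTimeCompare("secret-token", "wrong-guess")); // false
```

Bugs of this shape are invisible in casual testing – the functions return the right booleans either way – which is exactly why “the code actually looked pretty good” is such weak evidence for an auth library.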

Self-experimentation is gossip, not evidence

The debate surrounding this has been fairly typical for software development. The author claimed that a Large Language Model (LLM) agent let them build it faster and more reliably than they otherwise could have, and many in software dev are convinced that this is powerful evidence that these tools really work.

It’s not, for a good reason, but it’s also important to note the process here that bypasses the judgement of even smart people.

First off, that project is a single person acting without any controls. It has the evidentiary value of a blog post claiming that echinacea cured the author’s cold, complete with bloodwork showing no cold virus. That neither proves nor disproves the value of echinacea. It’s just gossip.

When all you have is gossip – and software development research is not great, as it’s genuinely a separate problem domain from computer science – you have to make do with it. The debate about types, TypeScript, and web development, for example, is largely anecdotal gossip not backed by much structured research. But when you’re trying to answer a question with huge ramifications, you really want proper research.

TypeScript, for those who aren’t familiar, is a Microsoft rewrite of JavaScript that’s incompatible with basic JavaScript in multiple ways and has a long history of breaking projects when updated.
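
A small illustration of that incompatibility (my sketch, not an example from any particular project): constructs like enums and type annotations are TypeScript-only syntax that a plain JavaScript engine rejects outright, so the code has to pass through Microsoft’s compiler before it can run anywhere:

```typescript
// None of this is valid plain JavaScript - pasted into a browser console
// it throws a SyntaxError. It only runs after compilation.

// An enum isn't just a type: it compiles into a runtime lookup object.
enum Plan {
  Free,
  Pro,
}

// Interfaces and type annotations, by contrast, are erased at compile time.
interface User {
  name: string;
  plan: Plan;
}

function describe(user: User): string {
  return `${user.name} is on the ${Plan[user.plan]} plan`;
}

console.log(describe({ name: "Ada", plan: Plan.Pro })); // "Ada is on the Pro plan"
```

Whether that compile step buys safety worth the lock-in is precisely the question we don’t have good evidence on.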

It looks more like an “enterprise” programming language for large institutions, but we honestly don’t have any evidence that it’s genuinely more suitable for those circumstances than regular JavaScript.

And it makes your project and business directly dependent on Microsoft, which is never ideal.

The decision to use TypeScript over JavaScript, despite there not really being any evidence available that doing so will make the overall system safer and more productive, is relatively harmless. It won’t kill people. It won’t disenfranchise anybody. It won’t lead anybody to being de-banked. It won’t deprive anybody of their right to an education.

Pretty much everything used to argue for or against TypeScript is either from self-experimentation or from anecdotal stories about other people’s self-experimentation.

And that’s where the psychological hazard comes in.

Self-experimenting with psychological hazards is always a bad idea

Self-experimentation is exactly how smart people get pulled into homeopathy or naturopathy, for example. It’s what makes them often more likely to fall for superstitions and odd ideas. The smart person’s self-identity means they can’t believe their own psychological biases are fooling them.

Don’t self-experiment with psychological hazards! I can’t stress this enough!

There are many classes of problems that simply cannot be effectively investigated through self-experimentation and doing so exposes you to inflicting Cialdini-style persuasion and manipulation on yourself.

Consider homeopathy. You might hear a friend talk about “water memory”, citing all sorts of scientific-sounding evidence. So, the next time you have a cold you try it.

And you feel better. It even feels like you got better faster, although you can’t prove it because you generally don’t document these things down to the hour.

You come away feeling much less sceptical about homeopathy.

“Maybe there is something to it.”

Something seemingly working is not evidence of it working.

  • Were you doing something else at the time which might have helped your body fight the cold?
  • Would your recovery have been any different had you not taken the homeopathic “remedy”?
  • Did your choosing of homeopathy over established medicine expose you to risks you weren’t aware of?

That last part is important because we humans have an extremely poor understanding of probability.

Something that happens one per cent of the time doesn’t register with us. Those odds leave us with the impression that hardly anybody we know has encountered the phenomenon, while it is in fact quite common.
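
A rough sketch of the arithmetic (illustrative numbers only, not a claim about any specific ailment): the chance of a one-per-cent event striking at least once grows quickly with repeated exposure.

```typescript
// Probability of at least one occurrence of a p-likely event across n
// independent trials: 1 - (1 - p)^n.
const p = 0.01; // a "one per cent" event

for (const n of [1, 10, 100, 500]) {
  const atLeastOnce = 1 - (1 - p) ** n;
  console.log(`${n} trials: ${(atLeastOnce * 100).toFixed(1)}% chance of at least one hit`);
}
// 1 trials: 1.0%
// 10 trials: 9.6%
// 100 trials: 63.4%
// 500 trials: 99.3%
```

At the scale of a population – or of everyone with a cold trying the same “remedy” – one-per-cent outcomes are guaranteed to happen to somebody.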

Poor treatment of a cold can lead to bronchitis or pneumonia and pneumonia can be lethal.

So a combination of an innocuous disorder and naive self-experimentation can literally kill you.

It’s unlikely, but the chances of it happening are not down to your intelligence or willpower; they are largely down to luck.

One of the reasons why I wrote the original LLMentalist post two years ago was that I wanted people to understand that chatbots and the like are a psychological hazard.

Experimenting with them can lead to odd beliefs and a serious misunderstanding of both how you and the chatbots work.

You can’t trust your own instincts or judgement about Large Language Models and chatbots because they trigger a number of cognitive biases and psychological “effects” that short-circuit our judgement.

Again, never self-experiment with psychological hazards. It can take you years to unwind the damage.

We need science

Our only recourse as a field is the same as with naturopathy: scientific studies by impartial researchers. That takes time, which means we have a responsibility to hold off as research plays out, much like we do with promising drugs, but the research is also stalled by the bubble.

Impartial research on “AI” is next to impossible at the moment. It’s as if we had a new wonder drug on the market but no way of knowing whether its risk profile is that of Thalidomide (brutal and horrible deformities), Paracetamol (low lethal overdose threshold), or Penicillin (systemic overuse destroys its effect).

Coming back to the project I cited at the start, there are a number of questions that it can’t answer that are kind of important for you to gauge its validity:

  • There are no controls or alternate experiments. We only have the experimenter’s word for this being faster than if he had done it himself.
  • We also don’t know if he’d gotten better or worse results by forking a related or similar library and rewriting it to do what he needed.
  • We also have to take their (Cloudflare’s) word for it that this is actually code of an equal quality to what they’d get by another method. There’s reason to believe it was not, given the beginner nature of the security defects that have already surfaced.
  • And, finally, we don’t know if they were just plain lucky playing roulette with a nondeterministic system.

This is the original sin of software dev: it’s a pop culture where we’re trained to accept gossip as evidence.

That’s fine if you’re debating largely meaningless details like “JavaScript, threat or menace?” but it’s very risky when the system in question is a psychological hazard built out of a Jenga tower of biases and Forer effects.

When that cluster of psychological hazards is being adopted as a solution for everything, everywhere, all at once, the consequences of letting yourself be conned by it are potentially enormous.

Marks become hazards in their own right

A big risk of exposure to con artists, such as psychics, is when a smart person is fooled by their own subjective experience and cognitive blindness to probabilities and distributions, refuses to believe they were fooled, and becomes an evangelist for the con.

This happens all the time with psychics, homeopathy, and faith healers. I wrote The LLMentalist Effect with a warning about exactly this problem:

Remember, the effect becomes more powerful when the mark is both intelligent and wants to believe. Subjective validation is based on how our minds work, in general, and is unaffected by your reported IQ.

If anything, your intelligence will just improve your ability to rationalise your subjective validation and make the effect stronger. When it’s coupled with a genuine desire to believe in the con—that we are on the verge of discovering Artificial General Intelligence—the effect should both be irresistible and powerful once it takes hold.

This is why you can’t rely on user reports to discover these issues. People who believe in psychics will generally have only positive things to say about a psychic, even as they’re being bilked. People who believe we’re on the verge of building an AGI will only have positive things to say about chatbots that support that belief.

And it’s not just limited to instilling a belief in the imminent arrival of Artificial General Intelligence. Subjective validation can be triggered by self-experimentation with code agents and chatbots. From the ever useful Wikipedia:

Subjective validation, sometimes called personal validation effect, is a cognitive bias by which people will consider a statement or another piece of information to be correct if it has any personal meaning or significance to them. People whose opinion is affected by subjective validation will perceive two unrelated events (i.e., a coincidence) to be related because their personal beliefs demand that they be related.

Using these tools in a highly subjective context as a self-experiment will make the result feel correct simply by virtue of it being born from your personal environment. Coincidences and random sequences will be seen as cause and effect.

This is why it’s unsurprising that somebody who personally stands to benefit from the “AI” bubble – he is an early employee at a hosting startup – sincerely believes his self-inflicted con:

I think this is going to get clearer over the next year. The cool kid haughtiness about “stochastic parrots” and “vibe coding” can’t survive much more contact with reality.

My AI Skeptic Friends Are All Nuts

I don’t recommend reading it, but you can if you want. It is full of half-baked ideas and shoddy reasoning.

  • This harm is fine because we regularly hurt people that way anyway.
  • False statements about quality and reliability that have been disproven numerous times in studies.
  • Legitimate concerns are countered simply with a command to shove it up your arse.
  • Mediocrity is fine. Normal even. No need for anything better.

There’s more. Much more. It’s mostly nonsense, which is why I don’t recommend reading it.

But one reason to highlight the shoddiness of its argument is that calls from authority figures are a cognitive hazard in and of themselves. If you aren’t familiar with how deceptive personal experience is when it comes to health, education, and productivity, you might find the personal, subjective experience of a notable figure in the field inherently convincing.

Even otherwise extremely sensible people fall for this, like Tim Bray:

I keep hearing talented programmers whose integrity I trust tell me “Yeah, LLMs are helping me get shit done.” The probability that they’re all lying or being fooled seems very low.

AI Angst

The odds are not low. They are, in fact, extraordinarily high. This is exactly the kind of psychological hazard – a lot to gain, subjective experiences, observations free of the context of their impact on other parts of the organisation or society – that might as well be tailor-made to trick developers who are simultaneously overwhelmingly convinced of their own intelligence and completely unaware of their own biases and limitations.

Many intelligent and talented people believe in homeopathy, psychics, or naturopathy. It isn’t a question of their integrity, as they are sincere in their beliefs. Nor is it a question of their intelligence, as subjective validation does not depend on a lack of intelligence. Education only protects you insofar as it teaches you – and the education of engineers and software developers never does – not to subject yourself to situations where your own biases and subjective experiences can fool you.

This is not a low probability risk. It’s an extremely high probability risk.

And make a note of who is publishing the blog post. Fly is a hosting company, so if the dual argument of “AI lets us go faster” and “AI makes mediocre code but we’re slowing down to make it usable” (paraphrased, but that’s essentially one of the arguments made in the post) is their self-contradictory policy, that puts a question mark on hosting with them.

The fundamental argument of the pro-AI crowd in coding is that it makes you go faster and that, after you’ve gone faster, you can slow down a little to catch the errors caused by going faster. They back this with personal anecdotes that are largely subjective validation of what they want to be true.

The problem, though, with responding to blog posts like that, as I did here (unfortunately), is that they aren’t made to debate or arrive at a truth, but to reinforce belief. The author is simultaneously putting himself on the record as having hardline opinions and putting himself in the position of having to defend them. Both are very effective at reinforcing those beliefs.

A very useful question to ask yourself when reading anything (fiction, non-fiction, blogs, books, whatever) is “what does the author want to believe is true?”

Because a lot of writing is just as much about the author convincing themselves as it is about them addressing the reader.

The only sensible action to take – which was also one of the core recommendations I made in my book The Intelligence Illusion – is to hold off. Adoption of “AI” during a bubble, without extensive study as to the large-scale impact of adoption, is the cognitive, productive, and creative equivalent to adopting a new wonder drug at scale without testing for side effects, complications, or adverse drug interactions.

It’s outright irresponsible.

Again, most who say we need “better AI critique” are either not paying attention to the actual critics or simply do not like being told that something they like has flaws that could be catastrophic if adopted at scale.

They inherently don’t accept any critique as valid, otherwise they’d engage with existing critique – such as that of Emily Bender and Alex Hanna – to find points they can act on.

There is no winning in a debate with somebody who is deliberately not paying attention.

Moreover, there is a simple test, a dividing line that explains the schism among developers on the usefulness of “AI” tools for software development and it has nothing to do with “AI” itself.

It’s a question of differing worldviews about what the state of software development was before the “AI” bubble.

Before LLMs arrived, the critics believed that existing software dev was flawed, largely inadequate, and a recipe for future crises, whereas the fans thought things were great but needed to be faster.

The LLM tools are all geared towards making existing – highly flawed – approaches to software development go faster, without addressing any of the fundamental issues that have caused recurring software crises over the years.

On that principle alone, even if you believed in the usefulness of LLMs in general, their application to software development is obviously and inherently a bad idea.

Making a broken thing go faster only makes it worse.

These two groups never agreed before chatbots and copilots became a thing and they are not going to agree today.

Those are two fundamentally different worldviews that will never be compatible.

That’s even before we get into the issues with how rampant cognitive hazards and the mechanics of manipulation are among “AI” tools and within the field of “AI” research.


We are also being let down by the epidemiology of our beliefs

What “AI”, homeopathy, naturopathy, and psychic cons have in common isn’t just that they tap into a number of biases and “effects” that the human mind is vulnerable to.

They all belong to larger, more complex fields of inquiry into topics and phenomena that even expert practitioners often only half-understand.

We still have a lot to learn about the human body, especially the human brain.

We only understand a fraction of what nature has to offer in terms of medicine and biology.

And even “AI” academics regularly talk about how they don’t fully understand how many of their larger models work.

These are perfect conditions for the spread of superstitious beliefs.

As Dan Sperber wrote in Explaining Culture in his attempt to explain the epidemiology of belief:

Half-understood or mysterious reflective beliefs are much more frequent and culturally important than scientific ones. Because they are only half-understood and therefore open to reinterpretation, their consistency or inconsistency with other beliefs, intuitive or reflective, is never self-evident, and does not provide a robust criterion for acceptance or rejection. Their content, because of its indeterminacy, cannot be sufficiently evidenced or argued for to warrant their rational acceptance. But that does not make these beliefs irrational: they are rationally held if there are rational grounds to trust the source of the belief (e.g. the parent, the teacher or the scientist). (p. 91)

This is why Tim Bray wasn’t being irrational when he cited the faith of his peers as a reason for his belief. This is how beliefs work. We rely on our trust of the sources to complete the loop from half-understanding to belief.

And because these ideas are only half-understood and vague, we can fit them in with our other ideas without problems or conflicts. There’s always a way for a developer, for example, to explain away conflicting details or odd events. The vagueness creates room to accommodate contradiction and furthers belief.

It’s no surprise that an almost religious faith in “AI” spreads fast among those open to the idea.

It’s not irrational to believe any of these things and trust your peers, but it is a mistake.

In our current environment – destruction of science funding, a massive “AI” bubble, the normalisation of gossip and hearsay as “facts” in software development – trusting the claims made by those advocating the adoption of “AI” is outright a mistake.

This is specifically the kind of large scale technology that needs thorough scientific testing because, on a micro-level, it might as well be purpose-designed to fool our judgement.
