AI Is Dehumanization Technology

by smartmic on 6/26/25, 7:41 PM with 172 comments
by perching_aix on 6/26/25, 8:28 PM

> Rather than enhancing our human qualities, these systems degrade our social relations, and undermine our capacity for empathy and care.

I don't genuinely expect the author of a blog post titled "AI Is Dehumanization Technology" to be particularly receptive to a counterargument, but hear me out.

I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. If I had to wager a guess: were these projects to integrate LLMs into their chatbots for just a few bucks and let them take the brunt of the interactions, participants on both sides would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond noncommercial contexts too, of course, to things like customer service.
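The wiring for this is genuinely cheap, too. A minimal sketch of what I mean, assuming discord.py and the OpenAI Python client (the model name, system prompt, and token are placeholders, not recommendations):

```python
import discord
from openai import OpenAI

llm = OpenAI()  # reads OPENAI_API_KEY from the environment
intents = discord.Intents.default()
intents.message_content = True
bot = discord.Client(intents=intents)

@bot.event
async def on_message(message: discord.Message):
    # Only answer humans asking in the #help channel.
    if message.author.bot or getattr(message.channel, "name", "") != "help":
        return
    # Blocking call for brevity; a real bot would run this off the event loop.
    completion = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system", "content": "You answer support questions for this project."},
            {"role": "user", "content": message.content},
        ],
    )
    await message.reply(completion.choices[0].message.content)

bot.run("DISCORD_BOT_TOKEN")  # placeholder
```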

by rolha-capoeira on 6/26/25, 8:10 PM

This presupposes that human value only exists in the things current AI tech can replace—pattern recognition/creation. I'd wager the same argument was made when hand-crafted things were being replaced with industrialized products.

I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.

by kelseyfrog on 6/26/25, 8:47 PM

Whether we like it or not, AI sits at the intersection of both Moravec's paradox and the Jevons paradox. Just as more efficient engines lead to increased gas usage, as AI gets increasingly better at problems difficult for humans, we see even greater proliferation within that domain.
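The Jevons half of that is just arithmetic. A toy sketch with made-up numbers, assuming constant-elasticity demand for the underlying service:

```python
# If demand for the service is elastic enough (elasticity > 1), a more
# efficient engine means *more* total fuel burned, not less.
def fuel_burned(efficiency: float, fuel_price: float = 1.0,
                base_demand: float = 100.0, elasticity: float = 1.4) -> float:
    cost_per_mile = fuel_price / efficiency
    miles_driven = base_demand * cost_per_mile ** -elasticity
    return miles_driven / efficiency

print(fuel_burned(efficiency=1.0))  # baseline: 100.0 units of fuel
print(fuel_burned(efficiency=2.0))  # ~131.9: doubling efficiency raises usage
```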

The reductio on this is the hollowing-out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue is that easy-for-humans problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.

We stand at a crossroads where one path leads to an existence impoverished of meaning, in which humans create and play by their own rules yet feel powerless to change them. What the hell are we doing?

by rafram on 6/26/25, 8:35 PM

> For example, to create an LLM such as ChatGPT, you'd start with an enormous quantity of text, then do a lot of computationally-intense statistical analysis to map out which words and phrases are most likely to appear near to one another. Crunch the numbers long enough, and you end up with something similar to the next-word prediction tool in your phone's text messaging app, except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.

This explanation might've been passable four years ago, but it's woefully out of date now. "Mostly plausible-sounding word salad"?
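What the article describes is real enough as far as it goes; it's a bigram Markov chain, buildable in a dozen lines. The distance between this toy and a modern LLM is exactly why the description is out of date:

```python
# A toy version of what the article actually describes: count which word
# follows which, then sample. This is a bigram Markov chain, not an LLM.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog slept on the mat".split()

following: defaultdict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # the "statistical analysis" step

def generate(seed: str, length: int = 8) -> str:
    """Next-word prediction, phone-keyboard style."""
    words = [seed]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # plausible-sounding word salad, at toy scale
```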

by mrcwinn on 6/26/25, 9:00 PM

Yes, before AI, society was doing fantastically well on "social relations, empathy, and care." XD

I remain an optimist. I believe AI can actually give us more time to care for people, because the computers will be able to do more themselves and between each other. Unproven thesis, but so is the case laid out in this article.

by tim333 on 6/26/25, 9:22 PM

>The push to adopt AI is, at its core, a political project of dehumanization

I can't really say I've seen that. The article seems to be about adoption of AI in the Canadian public sector, which isn't something I'm really familiar with as a Brit. The government here hopes to boost the economy with it, and Demis Hassabis at DeepMind hopes to advance science and cure diseases.

I think AI may well make the world more humane by dealing with a variety of our problems.

by old_man_cato on 6/26/25, 9:03 PM

Dehumanization might be the wrong word. It's certainly antisocial technology, though, and that's bad enough.

by PolyBaker on 6/26/25, 9:29 PM

I think there is a fundamental failure to understand power in the post. By that I mean that the author doesn't appreciate that technology (or any tool, for that matter) gives power and control to the user. This is used to further our understanding of the world, with the intent of creating more technology (a recursive process). The normies just support those at the forefront who actually change society. The argument in the post is fundamentally anti-technology. Follow this argument and you end up at a place where we live in caves rather than buildings.

Also, the anti-technology stance is good for humanity, since it introduces opposition to progress and questions the norm, ultimately killing off the weak/inefficient parts of progress.

by alganet on 6/26/25, 9:15 PM

I think AI could have been great.

It's just that greed took over, and it took over big time.

Several shitty decisions in a row: scaling it too much, stealing data, marketing it before it can deliver, government use. The list goes on and on.

This idea that there's something inherent about the technology that's dehumanizing is an old trick. The issue lies in whoever is making those shitty decisions, not the tech itself.

There's obviously a fog surrounding every single discussion about this stuff. Probably the outcome of another remarkably shitty decision by someone (so many people parroting marketing ideas, it's dumb).

We'll be ok as humans, trust me on this one. It's too big to not fail.

by tptacek on 6/26/25, 9:09 PM

The problem this author has is with the technology industry, not AI in particular, which really is just a (surprisingly powerful) cohering of forces tech has unleashed over the last 25 years.

by hayst4ck on 6/26/25, 9:51 PM

To call AI a dehumanization technology is like calling guns a murder technology.

There is obviously truth to that, but guns are also used for self-defense and protecting your dignity. Guns are a technology, and technology can be used for good or evil. Guns have been used to enslave people under colonialism, but they have also been used to win independence.

I disagree with the assessment that AI is intrinsically dehumanizing. AI is a tool, a very powerful tool, and because the very rich in America don't see the people they rule as humans of equal dignity, the technology itself betrays their feelings.

Attacking the technology is wrong. The problem is not the technology but that every company has a tyrant king at its helm who answers to no one, because they have purchased the regulators that might have bound their behavior, meaning there are no consequences for a CEO/king's misdeeds. So every company's king ends up using his company/fiefdom to further his own personal ambitions of power, and nobody is there to stop him. If the technology is powerful, then failing to invest in it while other, even more oppressive regimes do invest in it potentially gives them the ability to dominate you. Imagine you argue nuclear weapons are a bad technology while your neighbor is busy developing them. Are you better off if your neighbor has nuclear weapons and you don't?

The argument that AI is a dehumanization technology is ultimately an anarchist argument. Anarchy's core belief is that no one should have the power to dominate anyone else, which inevitably means that no one is able to impose consequences on anyone who ambitiously betrays that belief system. Reality does not work that way. The only way to impose consequences on a corrupt institution is an even more powerful institution based on collective bargaining (founded on the threat of consequences for failing to reach a compromise, such as striking). There is no way around realpolitik; you must confront pragmatic power relationships to have a cogent philosophy.

The author is mistaking AI for wealth disparity. Wealth is power and power is wealth, and when it is so concentrated, it puts bad actors above consequences and turns tools that could be used for the public good into tools of oppression.

We do not primarily have an AI problem but a wealth-concentration problem, and this is one of its many manifestations.

by dwaltrip on 6/26/25, 9:42 PM

The meat of the post does not depend on the characterization of AI as "mere statistical correlations" that produce "plausible-sounding word salad".

I encourage people not to get too hung up on that and instead to look at the arguments about the effects on society and how we function as humans.

I have very mixed feelings about AI, and this blog hits some key notes for me. If I have time later I will try to highlight those.

by adamc on 6/26/25, 9:23 PM

I think it's an interesting piece, and it calls us to consider how the technology will actually be used.

A lot of things that are possible enable evil purposes as readily as, or more readily than, noble ones. (Palantir comes to mind.) I think we have an ethical obligation to be aware of that and try to steer toward the light.

by gchamonlive on 6/26/25, 8:56 PM

Everything is dehumanization technology when society is organized to foster competition and narcissism and not cooperation and care.

Technology is always an extension of the ethos. It doesn't stand on its own; it needs and reflects the mindset of humans.

by drellybochelly on 6/26/25, 8:38 PM

It can be, but on the other hand, it's made me think in a radically different way about concepts in humanity.

by tolerance on 6/26/25, 9:15 PM

This article is informative, but I can't imagine it would do anything to spur the conscience of people who use AI in ways other than the harmful examples it illustrates.

by davesque on 6/26/25, 9:29 PM

I'm not usually a fan of progressive politics, but I thought Bernie Sanders made a great point the other day when he simply asked why adoption of AI would lead to layoffs instead of a 4-day work week. I don't think the question is naive. There really doesn't seem to be any good reason why the value of AI technology couldn't be distributed this way today, only that the people in charge of it don't want to do that, because they are so accustomed to claiming value instead of sharing it.

by mulippy on 6/26/25, 9:13 PM

pedestrian rehash of standard ai critique talking points without novel insight. author conflates pattern recognition with "dehumanization" through definitional sleight-of-hand - classic motte-and-bailey where reasonable concerns about bias/labor displacement get weaponized into apocalyptic framing.

the empathy-as-weakness musk quote does heavy lifting for entire thesis but represents single data point from notoriously unreliable narrator. building systematic critique around elon's joe rogan appearance is methodologically weak.

technical description of llms as "word salad generators" betrays surface-level understanding. dismissing statistical pattern matching as inherently meaningless ignores that human cognition relies heavily on similar processes. the "no understanding" claim assumes consciousness/intentionality as prerequisite for useful output, which is philosophically naive.

bias automation concerns valid but not uniquely ai-related - bureaucratic systems have always encoded societal prejudices. author ignores potential for ai to surface and quantify existing biases that human administrators would otherwise perpetuate invisibly.

deskilling argument contradicts itself - simultaneously claims ai doesn't improve productivity while arguing it threatens jobs. if tools are genuinely useless, market forces would eliminate them. more likely: author conflates short-term adjustment costs with long-term displacement effects.

"surveillance technology" characterization relies on guilt-by-association rather than technical analysis. any information processing system could theoretically enable surveillance - this includes spreadsheets, databases, filing cabinets.

the public sector romanticism is revealing. framing government work as inherently altruistic ignores institutional incentives, regulatory capture, and bureaucratic self-preservation. "mission-oriented" workers can implement harmful policies with genuine conviction.

strongest section addresses automation bias and human-in-the-loop failures, but author doesn't engage with literature on hybrid human-ai systems or institutional design solutions.

-claude w/ eigenbot absolute mode system setting

by npteljes on 6/26/25, 9:57 PM

>The push to adopt AI is, at its core, a political project of dehumanization

I agree with the general sentiment, but absolutely disagree with this claim. The push to adopt AI is a gold rush, not any coordinated thing. I think in the political arena they don't give a single f about how humanizing or dehumanizing a thing is, especially something as abstract as "AI" or whatever. Everyone out there is furthering their own limited-scope goal, according to whatever idea they have about how to achieve it. AI entered the public consciousness, and so companies are now in a race to not get left behind. Politicians do enter into the picture, but mostly as ones who enjoy the fruits of the AI effort: it makes a good public distraction and an effective tool for creating propaganda. But nowhere near is it a primary goal, nor does it nefariously further any underlying primary goal, such as dehumanizing the people. It's merely a tool, and a phenomenon with beneficial side effects.

by Lerc on 6/26/25, 9:05 PM

AI has the capacity to deflect accountability. That must be addressed. That does not mean that the intent, goal, or even primary result is dehumanisation.

Address the concerns specifically, suggest solutions for those concerns.

I have made a submission to a government investigation highlighting the need to explicitly identify when an AI makes a determination involving an individual, the mechanisms that need to be in place for individuals to be made aware when that has happened, and a method to challenge the determination if they feel it was incorrect.
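Concretely, the minimal shape such a mechanism implies is something like this (a sketch, not the wording of my submission; the field names are illustrative):

```python
# Every AI-made determination is logged, disclosed to the person it
# concerns, and carries a handle for lodging a challenge.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIDetermination:
    subject_id: str                      # the individual the decision concerns
    decision: str                        # what was determined
    system_version: str                  # which AI system produced it
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    subject_notified: bool = False       # has the person been told an AI decided this?
    challenge_ref: Optional[str] = None  # set once the person disputes it

    def challenge(self, reference: str) -> None:
        """Record that the subject has disputed this determination."""
        self.challenge_ref = reference
```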

I have seen a lot of blanket judgements vilifying an entire field of research and industry and all those who participate in it. It has become commonplace to use the term "techbros" as a pejorative, to declare people others.

There is a word for behaviour like that. That is what dehumanisation is.

by cynicalsecurity on 6/26/25, 8:05 PM

This almost reads as if AI is from the devil.

by teekert on 6/26/25, 9:11 PM

Btw, Rutger Bregman also considers empathy an error. It’s a complex argument.

by cheevly on 6/26/25, 8:09 PM

Down with data-driven decisions and probabilistic computing!!

by renewiltord on 6/26/25, 9:13 PM

Dear god, the endless reams of "woe is us" are worse than any LLM-generated content.

> Elon Musk - whose xAI data centre is being powered by nearly three dozen on-site gas turbines that are poisoning the air of nearby majority-Black neighborhoods in Memphis - went on the Joe Rogan podcast

Christ, who even reads this stuff. This constant palavering is genuinely too much.

by stego-tech on 6/26/25, 8:32 PM

The poster hits the nail on the head in the summary alone, but I’ll go a step further:

We have been duped for half a century into solving increasingly niche problems whose benefits accrue ever upward beyond our reach, and whose harms are forcibly distributed across an unwilling populace. On the whole, technology has done exponentially more harm (mass surveillance, psychological exploitation, automated weapons, pollution, contamination of data, destruction of natural resources, outsourcing, dehumanization) than good (medical technology, targeted therapies, knowledge exchanges, Wikipedia, global collaboration). Instead of focusing on the broader issues of survival staring us in the face, we have willingly ceded agency and sovereignty to a handful of unelected Capitalists who convinced us that this invention will somehow, finally, inevitably solve all our ills and enable a utopia.

Not one of the boosters of any prior modern “technological revolution” can point to societal good that outpaced the harms caused by their creation. Not NFTs, not cryptocurrency, and certainly not AI. Even Machine Learning has seen more harmful than helpful use, despite its genuine benefits to human society and technological progress, enabling surveillance advertising and the disappearance of dissidents instead of customized healthcare and efficient distribution of resources in real-time.

Yet whenever someone dares to point this out, we're decried by proponents as Luddites, ignoring the fact that the real plight of the Luddites wasn't anti-technology but anti-Capital. To call us Luddites derisively is tantamount to admitting the indefensibility of your position: you're acknowledging we are right to be angry at being harmed for the sake of Capital alone, but that you will do everything in your power to stop our cause. We aren't saying we want technology to disappear and to revert to the dark ages; we're demanding that technology benefit everyone more than it harms them. We demand it be inclusive rather than exclusive. It should amplify our good works and minimize societal harms.

AI in the current context is the societal equivalent of suicide. It robs us of the remaining, dwindling resources we have on yet another thin, hollow promise that this time, it will be different. Four years ago we literally had Crypto Miners lighting up Coal Power Plants while proclaiming cryptocurrency and NFTs will solve climate change somehow, and now AI companies are firing up fuel turbines and nuclear power plants while promising the same thing.

We need to stop obsessing over technical minutiae and showing blind faith in technology, and realize that these are all tools of varying utility. We have mounting evidence that AI is causing more harm than good right now, and that there is no practicable roadmap by which its benefits will outweigh its harms in the near term. For all this obsessing over share value and "progress", we need to accept the gruesome reality that our talent, our intelligence, and our passion are being manipulated to harm the masses, and that we alone can decide to STOP. It's about taking our heads out of the sand, objectively assessing the whole of the system and superstructure, and taking action to change things.

More fuzzy code and token prediction isn't going to save our asses or make the world a better place. The only way to do that is to acknowledge our role in the harms we perpetuate and to choose to stop them, regardless of the harms to ourselves in the moment.

by RS-232 on 6/26/25, 8:29 PM

Were water mills, spinning jennies, and printing presses dehumanizing too?

by wagwang on 6/26/25, 8:26 PM

This blog post basically reads as, "AI doesn't always adhere to my leftist values."