(Replying to PARENT post)

My parents are religious fundamentalists, and I've noticed there is increasing FUD being drummed up in their circles about anything to do with AI or significantly altering "what it means to be 'human'". The former is described as a front for evil spirits/demons, and the latter as the penultimate step before apocalyptic events, i.e. a precursor to the worst kinds of evil (in some circles, the 'mark of The Beast'). [For the record, I'm agnostic and feel the author raises some excellent points in any case.]

There will be extreme resistance to these technologies from many kinds of religious people who would be willing to die and/or kill to prevent what they see as a takeover by the most malevolent of invisible forces. I may not agree with their reasoning, but some of their concerns could be worth reframing and considering in a different light.

Personally, I strongly suspect we will have to fundamentally change as a species to get through at least one of the Great Filters looming before us. However, it's important to remember that the road is fraught with many perils and several possible paths which could lead to unimaginable suffering. It really seems to me that we're going to have to get lucky on multiple dice rolls here in the long run.

πŸ‘€slfnflctdπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

This sort of thing always happens in religious circles at the advent of new technology.

See: Credit cards
See: RFID chips
See: NFC phone payments

https://www.pillarcatholic.com/p/what-is-the-mark-of-the-bea...

https://www.cnet.com/culture/is-rfid-the-mark-of-the-beast/

https://christianjournal.net/nwo/apples-cashless-world/

It will probably be fine.

πŸ‘€RajT88πŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> some of their concerns could be worth reframing and considering in a different light.

There are some legitimate concerns about AI that sound almost like something out of the Book of Revelation, but are entirely rational.

The biggest one is the idea of AI being used to implement "automated con artistry at scale." Imagine assigning every living human being a virtual AI-powered con artist to tail them around and try to convince them of whatever is ordered by the highest bidder. This AI is powered by mass surveillance and big data, knows the "mark" intimately, and works on them 24/7, both directly and via their social network. Now throw in deep fakes, attention-maximizing "compulsion loops," and other adversarial models of human cognition.

It's basically the hydrogen bomb of propaganda, a doomsday machine for the mind. It would be like creating Satan, except this one would be entirely amoral and mercenary.

At the very least this would be the end of democracy in any form. I could see this ushering in the permanent victory of totalitarianism, since I can't imagine the masses being able to organize any resistance while constantly bombarded with propaganda from their demons. The new feudal aristocracy would be the ones running the demons.

This is one of the darkest plausible visions of the future I can think of at the moment, far worse than anything related to climate change or similar problems. If we manage to avoid these kinds of scenarios but still drown Miami I'll say we didn't do too badly.

Transhumanism comes with some analogous concerns around the potential for enhanced humans to enslave the rest of humanity through superior cognitive capacities and the incredible accumulation of wealth. I can imagine a scenario where a few wealthy people gain access to technologies for life extension and cognitive enhancement and then "run away," effectively becoming a new species and exterminating or enslaving "legacy humans." This isn't mutually exclusive to the AI hellscape I describe above, since that would be an ideal mechanism to enslave the rest.

Ultimately I think these technologies are all neutral. The problem is that we are not. I don't fear AI. I fear what humans will do with AI.

πŸ‘€apiπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

The last resort of every peace process is an appeal to our common humanity, so I do think there is some risk to tinkering with what it means to be human. Look at the state of conflict today, even with the benefit of us all being one species. We figuratively dehumanize our opponents in order to justify violence (β€œthey’re monsters!”). It could get a lot worse if we find ourselves in conflict with people who can be literally dehumanized, with whom we have lost the most fundamental thread of commonality.
πŸ‘€jl6πŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> Personally, I strongly suspect we will have to fundamentally change as a species to get through at least one of the Great Filters looming before us.

We have almost no evidence there's a filter ahead of us. It could be we already passed one and don't need to change at all at this point to survive long term. There could even be no filters at all.

πŸ‘€spicybrightπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> Personally, I strongly suspect we will have to fundamentally change as a species to get through at least one of the Great Filters looming before us. However, it's important to remember that the road is fraught with many perils and several possible paths which could lead to unimaginable suffering. It really seems to me that we're going to have to get lucky on multiple dice rolls here in the long run.

The opposite could be equally true. Evolving to rely on technology could make us weak, or break us in some manner that eventually destroys us.

Simple examples: climate change, genetic alterations, lack of reproduction, etc.

In other words, by rolling the dice we are running the risk. Looked at another way, you're correct. But every new theoretically destructive technology is another dice roll, and eventually we will blow ourselves up, even with long-tail odds of failure. The only way to survive is to slow progress and only roll the dice once we've mitigated the risk and increased our capability of surviving: running gene-related experiments more slowly (as with COVID-19 therapies), testing nuclear weapons on Mars, etc.

πŸ‘€citilifeπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Lots of people have the blanket "religion is bad" perspective. The problem is that technology has always been used to oppress people. The result of transhumanism will be oppression of all by a few (or worse, just one). And, following that trend out a few hundred years, the long-term result will be a single Human organism with one head. Would you rather that head be a Human or a God?
πŸ‘€afpxπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I wonder if the primary biological directives ("preserve my species, my group, and myself") play a role here.

Like, it's almost instinctual.

It does seem a lot of religious edicts are based around these directives.

πŸ‘€mythrwyπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

> It really seems to me that we're going to have to get lucky on multiple dice rolls here in the long run.

From a materialist/evolutionist view, this will always be the case.

πŸ‘€no-dr-onboardπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

Many AI researchers think our current trajectory towards AGI will be one of creating our misaligned overlords and the end of humanity as we know it. If that doesn’t change, you bet your ass a significant faction of humanity will go to war against anyone getting close to AGI, where AI researchers will be assassinated like nuclear scientists in Iran.
πŸ‘€throwaway_4everπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0

(Replying to PARENT post)

I don't think that any of these advances would necessarily lead to the end of the world, but as a thought experiment, picture any of these proposed improvements in the hands of, say, the North Korean government to use on their population. That's how bad these things could get.
πŸ‘€gaddersπŸ•‘3yπŸ”Ό0πŸ—¨οΈ0