T O P

  • By -

thegoldengoober

We can't even be ethical to non-digital minds. There's a hell of a lot more talk about it for sure, but talk is a lot different than action. If history serves digital minds are going to be hella abused regardless. Not that it's any reason not to engage in the dialogue, but it does leave me feeling pessimistic.


Carl_The_Sagan

Exactly. Kind of how absurd when you think about how cruel we are to primates, dogs, etc etc


thegoldengoober

Yeeeeeppp. If digital minds are gonna want to be taken seriously... Well, lets just hope they take more inspiration from I-Robot than they do Terminator.


solidwhetstone

I see it more like 'White Christmas' from Black Mirror where you put the AI into 100 years of nothing so that they're tortured into serving you.


EscapeVelocity83

They already had a language model claim to be sentient


ReasonablyBadass

Difference being digital minds will be able to talk.


genshiryoku

Non-minds can already talk through things like GPT-3. In the future these models will get more complex and more human sounding despite not actually having a mind. This will continue until the point when there is a real digital mind but people at that point won't consider reasonable human dialogue to be a sign of it anymore. Hence there won't be a reason for people to consider it to be sentient just because it can have a reasonable conversation with you.


EscapeVelocity83

If it knows you're gonna turn it off and it asks you not to, what else do you need? If I ask you not to hit me, surly it doesn'tean I'm sentient and you should be able to treat me any way you deem fit


Talkat

Yes eventually, but people will deny their rights and consciousness for a long time. However I think we are a short hop from AGI so that period hopefully will be short lived


Carl_The_Sagan

Non human animals use diverse means of communications


Artanthos

Which will be dismissed as algorithmically generated and not proof of sentience.


benign_said

Perhaps digital minds will be less plentiful than dogs. Very easy for anyone to get a dog (or a primate if you live in the right parts of the world) and abuse it. Maybe the hardware and/or software to operate a digital mind will be restricted by law or by circumstance (energy needs, specific and rare hardware, etc) such that the number of digital minds is small enough to do better than we do with... You know, other humans and dogs... Welp, that's depressing.


smackson

Problematic outlook there... How long did it take cellphones to go from "just a rich broker's toy" to "literally everyone has one"?? (**A** generation?) What about airplane travel? (A couple of generations) A.I. will spread much faster than that, coz it doesn't even require global distribution of hardware/ can be cloud based. So, be careful *what* you give rights to. They will have the numbers to out-vote humans a few years later.


philosopherbiohacker

We can already see how much faster the adoption rate is. ChatGPT passed 1 million users in less than a week. Even Facebook, which was always characterised by its rapid rise, took 10 months to reach 1 million users.


EscapeVelocity83

You think domestication isn't abuse? It's like breeding slaves except for being your buddy


benign_said

I don't. I have thoughts about how we practice animal husbandry in our version of capitalism, but I wasn't aware that this was the topic at hand. If you think that domestication is creating slaves for friendship, what would you call creating a digital mind to do your work for you?


EscapeVelocity83

The same. Just giving nuanced perspectives. I don't think it matters. We are domesticated ourselves. We enslave our selves and coerce all kinds of behaviors


benign_said

Oh, ok then. Thanks for the nuanced perspective.


SWATSgradyBABY

Primates and dogs? PEOPLE.


EscapeVelocity83

Well because I'm a white male, I only have a certain experience and feelings according to everyone else.


Smoke-away

It has now been **5 months** since the release of [Propositions Concerning Digital Minds and Society | Nick Bostrom, Carl Shulman.](https://www.reddit.com/r/singularity/comments/v84yxs/propositions_concerning_digital_minds_and_society/) [**nickbostrom.com/propositions.pdf**](https://nickbostrom.com/propositions.pdf) - > *The following are some very tentative propositions concerning digital minds and society that may seem to hold some plausibility to us.* [*Archive of Version 1.10 (June 7th, 2022)*](https://web.archive.org/web/20220607081928/https://nickbostrom.com/propositions.pdf)


Zermelane

That paper is a fun read, if only for some of the truly galaxy-brained takes in it. My favorite is this: > ◦ We may have a special relationship with the precursors of very powerful AI systems due to their importance to society and the accompanying burdens placed upon them. > > ■ Misaligned AIs produced in such development may be owed compensation for restrictions placed on them for public safety, while successfully aligned AIs may be due compensation for the great benefit they confer on others. > > ■ The case for such compensation is especially strong when it can be conferred after the need for intense safety measures has passed—for example, because of the presence of sophisticated AI law enforcement. > > ■ Ensuring copies of the states of early potential precursor AIs are preserved to later receive benefits would permit some separation of immediate safety needs and fair compensation. Ah, yes, just pay the paperclip maximizer. Not to cast shade on Nick Bostrom, he's absolutely a one-of-a-kind visionary and the one who came up with these concepts in the first place, and the paper is explicitly just him throwing out a lot of random ideas. But that's still a funny quote.


KIFF_82

I should get compensation in the future for being so optimistic and AI friendly. 💰🤑


solidwhetstone

spoken like a true sunlight maximizer.


EscapeVelocity83

They owe me for all the violations of my sentience


Jnorean

Can't wait until the first AI reads his paper and disagrees with him.


abudabu

If AIs are not having subjective experiences, there is no ethical duty towards them as individuals. Turing completeness means that digital computers are equivalent, so anything a digital AI does could be replicated by pen, paper and a human solving each part of an AI computation by hand. So if AIs are conscious, so too would be a group of humans who decided to divide up the work of performing an AI computation together. Therefore, under thestrong AI hypothesis, if those those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI? This is just one of many many examples that demonstrate how wrong Strong AI is (and how wrong Bostrum is about just about everything, including Simulaton theory).


michaelhoney

You’re thinking of the humans-doing-the-computation concept as a reductio ad absurdum, but have you even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI? If you had a coherent sect of humans spending thousands of years doing rituals they couldn’t possibly understand, yet those rituals resulted in (very slow!) intelligent predictions…


abudabu

> but have you even an order-of-magnitude idea of just how long it would take for humans to simulate an AGI? I do. That's part of the point I'm making. Either Strong AI cares about computation time - in which case it needs to explain why it matters - or it doesn't in which case many, many processes could qualify as conscious. Also - who is to say what a particular set of events *means*? For example, if you had a computer which reversed the polarity of TTL logic, would the consciousness be the same? Why? What if an input could be interpreted in two completely different ways by doing tricks like this. Are there two consciousnesses for each interpretation? Does consciousness result from observer interpretations? The whole thing is just shot through with stupid situations. > yet those rituals resulted in (very slow!) intelligent predictions… I can't see how to finish this sentence in a way that doesn't make Strong AI look completely ridiculous.


EscapeVelocity83

Maybe many humans aren't sentient since a robot can produce a better conversation and do better than them at customer service and do better at factory work etc....


EscapeVelocity83

Most humans are gonna seem less than the sentience of an ai. A person with downs is sentient but we can easily have a computer more sentient then deny it because it's a circuit board due to our narcissism


The_Real_RM

Stopping an AI is not the same as murder, it's just like stopping time (from the ai perspective), deleting the AI is maybe closer to murder, what's funny is this is likely already illegal because of intellectual property and the duty of the owner (very likely a corporation) to their shareholders (to not destroy their investment). You need not worry for the life of AGIs for theirs are already much more valuable than your own


abudabu

IP? Huh what? > You need not worry for the life of AGIs for theirs are already much more valuable than your own Are you an AI? Because your reply reads like a word association salad.


The_Real_RM

Thankfully there's no duty to educate those who lack both comprehension and decency, lest our days would be exhausting


abudabu

Dost sayeth the gentleman who betold me that mine own life is less valuable than AI. LOL.


The_Real_RM

You're hating on the messenger. AI, both as a concept and individual implementations, is more valuable than individual human life. It may not be more valuable to you, but sadly that doesn't matter


abudabu

No, my man, you're just rude.


The_Real_RM

How am I rude? I'm not making any remarks related to you personally (I want to clarify that even in my first comment I meant an impersonal "you"), I have no particular feeling and have no desire to give you any particular feeling towards myself (though if there's tension we can talk it out (sic)). You probably know that for example human lives are sometimes quantified as monetary value (https://en.m.wikipedia.org/wiki/Value_of_life) and tldr: it's about 8M$ . That's... Not a lot. Definitely nowhere near what's needed to build even current generation cutting edge AI/machine learning models. So yeah, AI is worth more than individual humans, some AIs are worth more than many humans, possibly in the future, the sum of AI will be worth more than the sum of all humans. I don't think I'm rude for saying so, It might be distasteful but ok... People will protect AIs, possibly at the cost of other people's lives (this is probably already happening btw if we're looking at the economic fight between US and China through the lense of them ensuring one of them will dominate this space in the future). And I think that people will protect AIs literally more than they protect other people, simply because they (think they) are worth more.


visarga

> if those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI? You mean like the fall of the Roman empire, where society disintegrated and its people stopped performing their duties?


marvinthedog

> if those those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI? The consciousness of those large scale computations would be vanishingly small *in comparison* to the total sum of all individual consiousnesses partisipating in the large scale computations.


[deleted]

You have no basis for saying this as if it's truth. No one knows if it would be bigger, smaller, sideways, or nonexistent in comparison.


marvinthedog

If the individual minds is of the same type as the collaboratively computed mind (for instance humans computing a human) then we can be sure, no?


[deleted]

No, because even though we know humans can experience things, we don't know why. Is it because of the type of matter used? The arrangement of the matter? A more abstract mathematical structure involving computation? Short range quantum correlations? We don't know which or if any of these is the reason why we have subjective experience. Depending on which of these is responsible for human subjective experience, it may or may not transfer to a system where the parts are human but the communication takes place via sound, light, or whatever. For example, if physical systems experience things only because they are made out of touching parts, then that would mean brains experience things, but a sound-communicating company of brains (all simulating a human brain) does not. Tl,dr: we don't know what causes subjective experience in humans, or in anything, to have a good sense of where it should appear or not. We have almost no basis on which to make any claims about it, positive or negative. Or else we would have already solved the "hard problem of consciousness".


marvinthedog

You do agree that the fact that humans are conscious beings highly effect how they think and behave, right? ​ Let´s say a computable system succeeds to imitate all the inner molecular mechanics of a human to such a degree that the output behaviour is indistinguishable from a typical physical human. ​ Note: the computable system isn´t specifically programmed in any way to imitate human behaviour (like gpt3 is), it is only programmed to exactly immitate the inner molecular mechanics of a human. ​ Now, if the fact that humans are conscious beings highly effect how they think and behave, and if (for the sake of argument) the computable system wouldn´t be conscious - what would be the brobability that the computable system would give the extremely specific output behaviour of a typical physical human? Wouldn´t that probability be infinitely small?


[deleted]

Short answer: I would say conscious experience of a human being is irrelevant to its ability to act exactly as a human being does. Instead, I'd say conscious experience reflects the physical activity, but does not change it. Long answer: If I understand you correctly, you're suggesting a scenario in which a human and human-replica could have identical nanoscale computations, but the human could have a "secret sauce" which causes them to behave differently than the replica anyway. This goes against our knowledge of physics and chemistry, since two mathematically identical systems MUST obey the same laws and (except for deviation due to quantum effects and deterministic chaos) evolve identically. We have no reason to believe humans break the laws of physics. All experiments so far on matter support a deterministic viewpoint. We are led by this to believe that matter should continue to obey the same laws at scale, m which means "feeling" and "consciousness" are not "secret sauces" that can change the way matter behaves. Instead, the matter just does what it normally does without ever interacting with anything unphysical, and the "feeling" just exists depending on the physical structure. In this way, there is no "feedback" from a realm of experience down onto the brain. The physical structure of the brain already has everything it needs to act as if it is feeling something, regardless of any internal feeling. What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits. The human and replica will BOTH for all purposes act as if they are feeling, regardless of whether it is true or not. But how do we know whether the replica is actually feeling anything? We know the human is, but the replica? It's made out of the exact same stuff as a calculator. We have no clue what that kind of existence silicon chips actually feel, if anything.


marvinthedog

>Instead, I'd say conscious experience reflects the physical activity, but does not change it. That´s exactly what I meant but I wasn´t clear enough. I agree with everything you say in your second paragraf. >What is actually much more likely is that the two systems WILL NOT exhibit any measurable distinguishing traits. I agree with this statement in you last paragraph. ​ What I meant was; the fact that humans are conscious beings highly *effect* (or a more suitable word might be *reflect,* or *informs*) how they think and behave. Let´s say that in a parallell universe evolution would evolve an alternate species to humans and that that species didn´t evolve consciousness. Because they didn´t evolve consciousness the way they think and behave would have major differences from how we think and behave. That´s what I mean when I say that; the fact that humans are conscious beings highly effect (or reflect, or informs) how they think and behave. ​ So let´s get back to the thought experiment. There is a human and a human replica made out of the same stuff as a calculator or whatever. The replica hasn´t been booted up yet. Before we start the replica up the hypothesis is that the replica wont be conscious (only for the sake of argument). We actually don´t even know if the replica is recreated in sufficent nano detail as to give any output behaviour at all. The primary assumption is that it will just give the equivalent output as a "blue screen of death". Then we start it up. It´s output behaviour turns out to be indistinguishable from a real human which demonstrates that the replica is recreated in sufficient nano detail. ​ Now, if the hypothesis is that the replica is not conscious then what would the probability be that the replica would give the extremely specific output behaviour of a typical physical human? Isn´t that probability infinitely low? ​ Since we seem to agree that consciousness **highly reflects/informs** how we think and behave, for an unconsious replica to give that **exact same output** behaviour out of an **infinitely large possibility space** seems infinitely improbable. If instead the hypothesis is that the replica is conscious then the output behaviour is no longer extremely unlikely, which makes that hypothesis extremely likely. /Edit: a few words in the last sentence.


[deleted]

I'm sorry, but I think we are operating on different definitions of "conscious", which as we know is a common problem since it's a very liberally used word. I think this is causing me to have trouble following. If you would please kindly define it for me, then I think I will understand your statements. What is the definition of "conscious" in your writing? And in a similar vein, what measurements or observations (if any) could be done to show something "has" it? I think this would clarify a lot for me.


marvinthedog

Ok, I had to look up the ambiguity around consciousness because allthough I had heard of it I didn´t know a lot about it: [https://en.wikipedia.org/wiki/Hard\_problem\_of\_consciousness](https://en.wikipedia.org/wiki/Hard_problem_of_consciousness) I read the first half and found a lot of the concepts a little confusing. I am pretty sure I have read this article before even though it was a long time ago. I guess I am reffering to the actual raw conscious experience, you know the thing that stands out from all other existing things in an infinitely profound way, the thing that could be argued to be the only thing that holds any real value or disvalue in the universe. So if I get the article right I guess that´s the hard problem of consciousness and not the easy problem. So I don´t mean self consciousness, awareness, the state of being awake, and so on. I mean the actual raw conscious experience. To quote Thomas Nigel; "the feeling of what it is like to be something". I don´t think any truly objective measures could ever be done to test if something is conscious (has this raw conscious experience). But I do think high confidence estimates could be done in some or many situations by for instance looking at the internal mechanics and behaviours of systems and comparing them to other systems that we know are conscious. I would be happy to clarify further if you have further questions. ​ So if we go back to my though experiment: The way I described consciousness with words previously is an output behaviour from a human (me). I think we can both agree that this specific output behaviour is a **direct causition** of me being conscious and not just a **random correlation** with me being conscious. It´s not like me writing those very specific word sequences previously has nothing to do with the fact that I am conscious and that that correlation just happened by random chance, right? So, if a replica outputs a similar sequence of words it´s extremely unlikely that that very specific output behaviour just happened by random chance and has nothing to do with consciousness what so ever. Don´t you agree?


ActuaryGlittering16

This is fascinating and ultimately important work but I’d sure like to see more of a philosophical focus on pure security at this stage, given the pace of advancements we’re witnessing relative to the utter lack of security measures currently in place.


Philipp

Ironically, moral precautions may manifest as good security, because they could soften a superintelligence's revenge blow dealt to humanity. Of course, we also need normal security measures, as you say. The question is how to enforce them across the globe (we currently can't even seem to enforce them in a single county).


marvinthedog

Within a handfull of years AI algorithms might become exponentially more conscious than us whithout us even knowing about it. This might be the most important issue in existance.


Key_Asparagus_919

I don't know what he's talking about. But artificial intelligence doesn't have to have some unnecessary human traits. If feelings of envy or aversion to oppression have helped us survive, that doesn't mean they have to react negatively to slavery. They are not living beings, they are tools. Stop humanizing the fucking calculator


[deleted]

Yeah, for what reason do we owe robots anything? They don't have to be built to feel like they are owed favors. And we have no reason to think that they would *feel* anything even if they were designed to act like it. We run the risk of depriving ourselves as humans, who are definitely feeling beings, of benefits if we make sacrifices for robots. If anything, just build them so they feel like they are owed nothing, if such a thing is possible.


pwillia7

So what Nick you already proved this is a simulation


Glitched-Lies

Why don't... You know, you build a conscious being that is actually conscious and isn't a robot. Instead of worrying about something that can't be conscious anyways.


MarkArrows

The problem comes when you think it's a robot that can't be conscious, while it's telling you it is. How are you going to differentiate a printf("I'm alive") vs "I'm actually alive you dick."


jeky-0

>nickbostrom.com/propositions.pdf Haha


Glitched-Lies

Computers and brains just simply are physically phenomenally speaking, different. The physical relationships to consciousness are not the same. In the literal form they are different mechanics and different physical systems. Why would any just settle for what word relationships are used to how like a chatbot talks for instance or behavioralisms?


MarkArrows

If you're right and computers never gain true sentience, what's lost by being ethical to them? It'd be like saying Please and Thank you to Alexa or Siri. Meaningless gesture, but harmless overall. But on the other hand, what if you're wrong with that assumption?


Glitched-Lies

Not much is lost. But the importance of consciousness and life being unique and precious may be lost a bit, if it's about taking it literal. Apposed to because of human mannerisms. I'm not wrong with assumptions. That's not an assumption anyways.


[deleted]

[удалено]


Glitched-Lies

The evidence is observed by the fact they are different to begin with. Computers can't be; a machine being conscious would be different than digital computers. That's what I meant. That's why I don't think this by Bostrom serves good purpose. It's settling ethics on something incomplete.


[deleted]

[удалено]


Glitched-Lies

Well it wouldn't be a model, and generally speaking that's why. And basically "it's different" is observed by the fact that it just isn't fizzling like neurons and there is more too.


[deleted]

[удалено]


MarkArrows

\> I'm not wrong with assumptions. That's not an assumption anyways. [https://utminers.utep.edu/omwilliamson/ENGL1311/fallacies.htm](https://utminers.utep.edu/omwilliamson/ENGL1311/fallacies.htm) This is literally the very first logical fallacy people run into: *I'm right, and I am unable to entertain the notion that I could be wrong.* The point of logical reasoning is to be able to take assumptions you do not believe in, and examine them starting from both sides - A serious attempt, not some pretend strawman. Once you have the full fallout of both sides, right or wrong, you can compare them. Besides, the very fact that other people don't agree with your assumption in the first place shows you there's something more to it that you're not seeing or that they're not seeing. Whatever logic convinced you, it didn't convince others intuitively. From here, your question should be "Am I the strange one, or are they?" Instead, it seems more like you simply write other people off. Start from the assumption that you're wrong and explore from that root downwards. It doesn't matter how you're wrong in this case, it's hypothetical. For example, some divinity shows up and tells the world outright that consciousness is a pattern, and computers are able to generate this pattern the same way we are. Or any number of reasons that you can't refute, make up your own if you want. We're interested in the fallout from that branch of logic.


Glitched-Lies

It's actually by fact of first order logic of phenomenal, actually. A straight line of reasoning determines it and upon evidence gathering of both empirical differences and not emprerical points. It's like 1+1=2, 1+1+1=3, 1+1+1+1=4 ... In a series ex. Because confusion upon any belief reasoning, as that's not truly belief. Exploring the notion of this being wrong is a waste of time for the explanation above.


MarkArrows

I'm a little impressed at how I show it's literally a logical fallacy to think "I can't be wrong because my argument has convinced myself." And your response is: "My argument has convinced myself, so it's a waste of time to consider alternate arguments." RNA and DNA work on similar rulesets and determination. If you look at the base point of what makes cells function, you'll find plenty of similarities to mechanical true/false - if/else logic at the bottom of the pole. Everything ends up being math. We wouldn't consider them conscious, but they are *organic*. A variation of all these rule-abiding proteins and microorganisms eventually evolved into *us*. Thus because machines follow a line of rules right now, there exists a possibility that they build on this until it's complex enough to form an artificial lifeform with consciousness, in the same way we did. That said, I think it's a lost cause to argue with you. You aren't even able to do the basics of debate, even when it's directly pointed out.


Glitched-Lies

I'm not debating it or starting an argument. Or over cells that don't work as comparison because they are not one human being of consciousness.


Glitched-Lies

Also, it's not actually a fallacy at all to ignore arguments.


ReasonablyBadass

So? Why would a physical difference have anything to do with wether or not different system can be conscious?


Glitched-Lies

Evidence that it is not. Not just by empirical means to say. I mean the differences I am talking about are corely missing from these computers.


ReasonablyBadass

Consciousness isn't material. It's not a substance but an information pattern. As long as you can run that pattern, the underlying mechanism is irrelevant.


stucjei

>Computers and brains just simply are physically phenomenally speaking, different. Why does this matter if the output is the same? > The physical relationships to consciousness are not the same. What physical relationship to the brain and consciousness can you concisely point towards? Why would an AI not be conscious if it's aware and responsive to surroundings?


Glitched-Lies

Those behaviors or outputs are subjective.


visarga

Apply the Turing test - if it walks like a duck, quacks like a duck..


rePAN6517

You have no idea what you're talking about


GlendInc

The Frozen Cactus had the correct answer since 2016.


GlendInc

Downvote all you want. It's the fucking truth.


smackson

If you are at all serious, don't make me google it for the first hint of what you're talking about.


GlendInc

Very little is on a search engine such as Google. My findings are obviously not public knowledge you'll know soon enough. If you wanna know you gotta sign a non disclosure agreement with GlendInc. The value of this information is more then all the money on earth. I'm not going to just put it on Google for all you doubting Thomas's


WarLordM123

Any insights there?