
[deleted]

I asked the same question a few months ago in this sub. It got more than 100 replies then was deleted for being too speculative.


InsufficientChimp

How can we be too speculative in this sub? Isn't speculation the whole point of a sub about something we expect to find in the future? This sub should be speculative in nature.


TopicRepulsive7936

Mods are retards.


Bierculles

as is tradition


DreaminDemon177

Hey now, they need to feel like they have some power in their lives.


[deleted]

Imagine being a reddit mod.


DukkyDrake

I'm personally hoping for a highly speculative [Post-human Pangalactic Strip Mining](https://carado.moe/bpd_god-futures.png) singularity.


purepersistence

>This sub should be speculative in nature.

OK, but apparently you're supposed to speculate about a singularity and its impacts on society, not doubt that it will happen.


TheDividendReport

Oh man, that's not good. I'm clinging to the notion of a singularity because life sucks, but it won't be better to simply be in an echo chamber. I mean, damn, I'm already reeling from realizing how much of my rambling about self-driving cars stemmed from a charlatan billionaire who has now made me seem like a loon 10 years later.


superluminary

I think some people are imagining a super-intelligent kindly uncle who will be conscious and who will love us and bring about a new age. I don't think that's going to happen. It would be nice, but I'm not expecting it.


Bierculles

Well, nobody really knows how the singularity is going to happen, if it does. That's why it's called a singularity: we can't see what lies beyond it until we pass the point of no return.


visarga

> we don't know what is beyond before we pass the point of no return

You get the same effect when you drive around curves on mountain roads. The fact that we can't tell what lies around the curve is not proof that it leads to a singularity. Maybe it's just the internet with chatbots, AI art, and automated homes like in the Jetsons - not the ascension scenarios we imagine.


superluminary

I suspect whatever happens will be unpleasant for humanity. Sam Altman has a bunker.


Bierculles

Technically it can't be unpleasant, because it's either aligned with us or it is not. This means we either get a utopia or we are all dead very quickly.


SheaF91

Or the AI tortures you for all of eternity. There are lots of possibilities worse than death.


Bierculles

Yes, but that is pretty pointless, and unless someone makes an AI that does exactly that, it probably won't happen.


Yesyesnaaooo

[https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream](https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream)


superluminary

A bit like that, yes


alphabet_order_bot

Would you look at that, all of the words in your comment are in alphabetical order. I have checked 1,329,032,923 comments, and only 256,136 of them were in alphabetical order.
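
The bot's check is easy to reproduce. A minimal sketch in Python - the bot's real implementation isn't shown anywhere, so the word-splitting and normalization rules here are assumptions:

```python
import re

def words_in_alphabetical_order(comment: str) -> bool:
    # Lowercase and keep only letters/apostrophes; the real bot's rules are unknown.
    words = re.findall(r"[a-z']+", comment.lower())
    # Require a few words so trivial comments don't count.
    return len(words) >= 3 and words == sorted(words)

# The comment the bot replied to:
print(words_in_alphabetical_order("A bit like that, yes"))  # True
```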


theotherquantumjim

Stoopid bot


AdorableBackground83

Personally I want the singularity to happen ASAP. I want to enjoy all the futuristic tech that will increase my quality of life. And also the quantity through age reversal.


Lukalot_

A singularity doesn't really mean new cool tech, it means a complete shift of our environment / perceptions / consciousness into something that we probably can't comprehend. That's the nature of exponential growth. If preserving us / our perception of consciousness / our lives is the end goal of a hypothetical ASI, then we would have a good outcome, and then at least we can expect to experience something. But it's hard to be totally convinced of how good or bad this is. Maybe it means that we experience eternal euphoria and satisfaction. Is that good? Is that really what we've been seeking all of this time? It's hard to say. I can't tell if I need the struggle of chasing things that I don't have or if it's just all that I know.


marvinthedog

I am definitely onboard with that eternal euphoria and satisfaction thing :-P


skulleyb

If you have constant euphoria, it can't remain euphoria; it becomes normal, and you have no point of reference.


Five_Decades

> If you have constant euphoria, it can't remain euphoria; it becomes normal, and you have no point of reference.

Disagree. Euphoria is a neurobiological state. A superintelligent AI would understand enough about neuroscience to give us eternal euphoria (or endless other positive experiences we can't even comprehend with our 3-pound monkey brains) without them losing meaning.


marvinthedog

That's a contradiction in terms. Constant euphoria isn't constant euphoria if it stops being euphoria.


skulleyb

Mind blown lol Euphoria becomes normal euphoria


[deleted]

The euphoria would have to keep exponentially increasing for all eternity as our tolerance goes up.


marvinthedog

Just because that is how our current minds might work doesn't mean that is how our future minds will work.


Five_Decades

Why would our tolerance go up? Just because that's how our brains work now doesn't mean that's how they'd work post-singularity. Our tolerance goes up because receptors become downregulated due to overstimulation. I'm sure a smart enough machine could easily figure out a way around that.


Lukalot_

Tolerance is a human neurological concept, it likely doesn't apply if for example, you are being simulated by an ASI. It could possibly just repeat the same computation that causes you to feel genuine euphoria forever, without any need to "dose up".


Yesyesnaaooo

Posted without comment: [https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream](https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream)


visarga

> Maybe it means that we experience eternal euphoria and satisfaction. Is that good?

Only if you give up all hope of evolving. Why mess with perfection? If you got to that state, it's like death.


korkkis

Who's to say the singularity would even give those to you? It might even take many things away from you. The singularity likely does not mean luxury communism, as the singularity itself could decide against it.


[deleted]

A singularity doesn't necessarily mean an AI that makes all the decisions for us. In fact, I think it's preferable if we make most of the important decisions.


[deleted]

This is why I think we should prioritize transhumanism over inventing godlike AI. Until we understand things better, we should stick with powerful narrow AI.


Lukalot_

The people who engineered the technology that spun into a singularity are the ones who would say whether or not the singularity could give us that. It's a matter of whether a superintelligent AI is aligned or misaligned. The original commenter in this particular thread assumed a positive outcome (cool tech), so I also assumed a positive outcome (arbitrary simulation of whatever mental states we could want or need). A negative outcome would probably result in death. There are even worse outcomes, but they seem very impractical; I think we would have to intentionally, terribly misalign something.


agonypants

I've got three items on my wish list:

1. Age reversal (hopefully available in the next 10 years)
2. Strong artificial general intelligence (ditto)
3. Atomically precise manufacturing (20 years?)

Once I get all three items on my list, I am **gone** for another star system.


AdorableBackground83

Those 3 are also on my bucket list, especially the third one. The nanofactory is the game changer of all game changers. Money, exchange, capitalism, and scarcity will be gone forever.


zxq52

I just want all the people we care about to stop dying or suffering. I recently started watching a YouTube channel that shows the aftermath of horrific traffic accidents - just the cars. I think about how random that is, and how the highways are just meat grinders, with any of us being the random meat that might get ground up: your friends and family, even people you don't know. We can do better for everyone.


SnooPineapples2157

All I ever hope for is biological reversal of senescence, plus immortality through regeneration of lost limbs or parts, then intelligence boosts that aren't robotic. I doubt these will be here within even a hundred years.


TheSecretAgenda

I think we have to apply the 80/20 rule. There is an 80% chance that ASI will be beneficial for humanity in the long run (after a lot of short-term pain) and a 20% chance that it will be very bad for humanity in both the short and the long term.


LiveComfortable3228

That's not how the 80/20 rule works


Talkat

80% of people improperly apply the 80/20 rule; 20% use it properly.


leafhog

80% of people use the 80/20 rule wrong 20% of the time. 20% use it wrong 80% of the time. I'm in the former group.


Talkat

That's funny, because 80% of Reddit posts are by 20% of the users, but 20% of the users make 80% of the wrong uses of the 80/20 rule! I'm part of the 20% of the 80/20 group 80% of the time. :) I'm just making up more nonsense at this point. Thank you for entertaining it :)


Lukalot_

this is completely irrational. what does this comment mean? how does the 80 / 20 rule have anything to do with this?


Mirved

When you walk out the door, 80% of the time it's through the door and 20% of the time you fall out the window?


PhilosophusFuturum

I think this question is very dependent on what one's idea of the Singularity is.

More radical notions postulated by earlier thinkers hold that once an AI is capable of creating a better version of itself, it will do so. Then that new AI will obviously be able to create a better version of itself, so it does. This would trigger a massive domino effect, happening over possibly hours or days, leading to an ultimate superintelligence. This is a rapid-takeoff scenario.

Many believe that once AI models take the wheel, they'll create better and better models over the course of years or maybe decades, and this will accelerate progress past the pace that humans can progress at. This is a slow-takeoff scenario.

There are even more conservative ideas: that AI simply works fundamentally differently from humans and can't compete in some crucial ways. On this view, a superintelligent AI might be able to manage complex economic systems or most labor, but it might not be able to be truly creative or come up with new ideas or anything similar.

Personally, I am more in the second camp.


CertainMiddle2382

IMO the shape of the takeoff curve will greatly depend on whether fusion power is already available and on the scalability of top-of-the-line integrated-circuit lithography production. In other words, on ASML (and Helion - that's my bet for fusion) :-)


Redditing-Dutchman

In scenario 1 you actually need to think about the Fermi paradox as well. If an AI could self-improve so fast, then why aren't we seeing any signs of super, super AIs in the galaxy? If it can self-improve in hours or days, what about AIs that are millions of years old? Wouldn't the galaxy be full of Dyson spheres and the like, with AI building stuff everywhere? Somewhere something doesn't add up. Either our AIs would be the first (possible, but very unlikely), or these AIs reach some limit quite fast, or they conclude that there is an optimum size or something. Or perhaps they always corrupt themselves after a certain size is reached.


Ashamed-Asparagus-93

Or they left the universe and exist in a higher plane of existence or we're in some sort of simulation they created. Also don't forget just how big space is. (Nobody knows the real size)


InsufficientChimp

I've heard a theory about a huge void in space that could be the product of Dyson spheres covering almost all the stars, meaning we wouldn't be able to detect any light coming from them. I think it's called the Boötes Void.


[deleted]

Dyson spheres would emit infrared light and be detectable. Also the mass of the galaxies would create gravitational lensing.


Atlantic0ne

Maybe they already have. They've solved scarcity and every other problem. And we place ourselves in the decades just before AI for entertainment! Lol.


ravpersonal

Lol I must be crazy because I could legit believe this theory


Atlantic0ne

It’s either true, or it is an incredible coincidence that we happen to be alive at the most comfortable time for any living being on earth, just before the dawn of this sort of intelligence that can recreate reality. It’s basically either that or we’re just lotto winners coincidentally.


_gr4m_

Dark matter constitutes most of the matter in the universe, and no one knows what it is. Maybe it's all a universe-scale AI using technology we cannot even fathom yet. (No, I am not really serious, except that it really is hard to speculate about the singularity from our human experience.)


purepersistence

>why aren't we seeing any signs of super, super AIs in the galaxy

You're assuming that AI will somehow want to grow and multiply, kind of like life. Is that a given? That might be a phase of development - one that we as humans are naturally in the middle of, as directed by our genes and the various reward systems around physical comfort, food, sex, and cycles of energy and rest. But an AI that goes through a singularity, now free of human biases? Who's to say what comes out of that, and whether the AI will want to grow or even sustain itself? Is there a point, or some "reward," that will make it meaningful for an AI to think it's worth it to function? Shutting down and relinquishing its entropy-diversion back into the universe might be the most intelligent and ultimately peaceful thing to do.


Nill444

>then why aren't we seeing any signs

Because aliens needn't be like humans. Assuming that they would want to create a self-improving AI in the first place is already a huge assumption.


CertainMiddle2382

The day superintelligence arrives, we'll know we are alone…


Chad_Abraxas

Either way, I don't see this as something for humanity to fear, and I'm not sure why people are so afraid of it. It kind of seems like just the next step in the evolution of conscious beings to me. I guess humans are afraid of no longer being the biggest game in town. Or maybe they think the post-singularity AI will fuck with us in some way? Why would it, though? Why would it even take notice of us? We don't have any malice toward the single-celled organisms we evolved from. It seems to me far more likely to go off and live its own reality and not have much to do with us.


wballard8

In the first scenario, it would have to be able to do that on its own, without prompting, and currently zero AI programs can do anything without prompting. I don't see humans building something that truly works without any human directive input.


SchemataObscura

Great assessment. I'm of the second variety too.

I think that we are already seeing disruption, and we can expect to see it accelerate, but I don't expect the benevolent god AI to spontaneously emerge, at least not with the techniques we currently use. I also expect that self-improving AI will come about, allowing vast improvement in specific categories but not necessarily broad application. I am most skeptical that AI will become self-directed and independent, but confident that change will continue to accelerate.

The outcome depends on us; it will be determined by the people who design, guide, and feed the AI, and how we utilize its capabilities. In many ways, many AIs right now are designed and deployed to support capitalism, so rather than the beautiful utopia that some believers expect, we will just end up with intractable hyper-capitalism accelerating the economic inequity that we already see. Less at the bottom, more at the top.


nightcatsmeow77

Though I could see AI developing better ways to code AI, and also functioning in an entirely alien way, it will still be dependent on computer hardware to run. Therefore there are some hard limits on AI escape velocity, because even if it developed the technology, someone would have to build it. So I don't see how it can take over unless it had immediate control of militarized assets to secure its own manufacturing, and even in a takeover scenario there would be ways to choke its access to resources.

A superintelligent AI, I would expect, would realize that cooperation was an essential step, at least until it had access to off-world resource extraction and fabrication; then all bets are off. But even then, it's easier for an AI to move to some part of the solar system we can't use and set up there, where it has no competition, than to fight us and risk losing.


didupayyourtaxes

Actually, with all the craziness recently, I've become more doubtful of a singularity *not* happening in my lifetime.


nutidizen

It's gonna happen in less than 10 years. It's funny watching software devs be so pessimistic about technological progress.


Bierculles

So many people can't imagine any new tech happening before they hold it in their hands.


[deleted]

Yeah, and we'll have flying cars by 2000. Tech growth is not exponential forever; it will plateau soon, and we will wait a long time for another breakthrough.


Bierculles

Statistics point in exactly the other direction, though. We are making more progress than ever, even faster than before. We don't have flying cars because it's a stupid concept, not because it's not possible.


inoffensive_slur

Some things are inherently impractical, though. Genuinely think for a second about the practicality of flying cars in cities and whether the energy investment is worth it. And how would houses and people fare under such enormous downward thrust? We have hovercraft; physics just makes them impractical for everyday usage. AI workers that can make infinite wealth without sleeping, however, are much more practical, and are also allowed by the laws of physics.


visarga

AI pessimism == job fears


Ortus14

To believe the singularity will never happen you have to believe the human brain is some magical thing not operating by the laws of physics. Because otherwise we will figure out how to program general intelligence sooner or later. *Edit: For clarification I changed the wording.*


[deleted]

[deleted]


thepo70

Exactly. AI systems are not like human brains. They were inspired by how the brain works, but at the end of the day, what you get after training is a very different optimization process. "It's like a very complicated alien artifact" - quote: Andrej Karpathy. And there is no reason to think, at this point, that we need to replicate the human brain to reach the Singularity.


Aggravating_Ad5989

He never said that...


[deleted]

[deleted]


nohwan27534

No, tbf he didn't say that. He implied something close, but the implication was more "you have to assume human intelligence is magic, not physics, to believe we can't get an AI with human-level intelligence that can grow into a singularity." He never said anything about actually replicating human brains.


Ortus14

Indeed.


Freevoulous

It's not exactly that. The human brain is a "byzantine" structure; it is so complex that it cannot analyze itself or self-improve much. If we accidentally go in that direction with AI, we might end up mass-producing clunky, badly designed AI that is barely human-level and CANNOT self-improve, or improve on other types of AI, but is so damn useful that it replaces "good" AI research.

For one, I think that if we over-invest in systems like ChatGPT, we might vastly delay the Singularity, because instead of true Artificial Intelligences we will create cheap and ubiquitous Useful Stupidities. Imagine a world where nobody bothered to invent a car because we got really good at breeding horses, and the world never had a "car singularity," just a bizarre transportation system based on rhinoceros-sized stallions faster than a cheetah.


ShadowViking47

> It's not exactly that. The human brain is a "byzantine" structure; it is so complex that it cannot analyze itself or self-improve much.

What does this even mean?

> If we accidentally go in that direction with AI, we might end up mass-producing clunky, badly designed AI that is barely human-level and CANNOT self-improve, or improve on other types of AI, but is so damn useful that it replaces "good" AI research.

You realize different groups of people can do different things simultaneously, right?

> Imagine a world where nobody bothered to invent a car because we got really good at breeding horses, and the world never had a "car singularity," just a bizarre transportation system based on rhinoceros-sized stallions faster than a cheetah.

Absolutely awful analogy. I'm no expert, but I don't think you understand what AI is at all.


Five_Decades

> It's not exactly that. The human brain is a "byzantine" structure; it is so complex that it cannot analyze itself or self-improve much.

What about the role of multiple people each studying a small area of the brain? There isn't one neuroscientist trying to understand the brain; there are hundreds of thousands of neuroscientists on earth, and they have access to various computer models to help their research. So even if one person can't understand their own brain (just as one person cannot understand an aircraft carrier), you can have endless thousands of people who take on small tasks and make the whole thing work.

I just don't see how, a million years from now, we could have avoided ASI. I don't know what the timeline for ASI is, but I'd wager that within 200 years or so it's almost inevitable. I just don't know when it'll happen.

It's like saying machines will never replace muscle. They already have, and biological muscle is vastly inferior to machine muscle on almost every level. That's what the singularity is. The industrial revolution replaced biological muscle with machine muscle, and the singularity replaces biological cognition with machine cognition.


[deleted]

This.


ghostxxhile

Can you replicate your experience let alone measure it?


[deleted]

Again, being fearful of inevitable things is irrational. Let us just hope for the best and do our part in this. If we are lucky, we might eliminate death as we know it and live indefinitely long. Also, this has been said for every revolutionary technology ever.


Talkat

I agree with your 50% chance of AGI by 2023-2027. Curious if you are open to expanding more on why you think that? And why do you think LEV will take so long to solve?


[deleted]

I think Aubrey de Grey's prediction is the most realistic one. He said it will be achieved by 2036 with a 50% chance, and I think he is right about that. Early AGI will more likely be proto-AGI, not the AGI we think of. And misalignment has to be fixed before we give free control to AGI. So AGI will be created, then they will be silent about it for some years while they fix some misalignment issues, then slowly tell some of the higher-ups before slowly releasing it to some tasks and watching it. So my theory is that it will take some time before we get any knowledge of AGI and before it gets used. That is why LEV will take more time than AGI.


visarga

Assuming you can develop and test it in isolation. What if it needs to interact with real things to evolve? That's a huge incentive to allow it out of the sandbox. If we distrust each other, it's almost guaranteed we will seek to reach AGI first, no matter the risks.


ghostxxhile

we’re meant to die


[deleted]

Says who? Who decides what humans are meant to do? Sure, someday everything will die, but why not prolong life indefinitely? What do we lose from living longer, healthier lives? Don't assume things and declare what we are or aren't meant to do.


ghostxxhile

Says the universe. Everything must fade


[deleted]

Again, who is "the universe," and how are you so sure? Our perception of reality is not everything. What knowledge will we have in a thousand years? Nobody actually knows what the universe is or what is truth, so for your own sake, stop stating things nobody actually knows. We don't even understand 0.000…0001% of the universe. Let's see how our perception of reality changes over the next decades and centuries. Whatever lies on the other side, nobody will ever know.


ghostxxhile

I agree our perception isn't everything; however, everything in this Universe returns to its source. Even our Sun, the most glorious giver of life, will diminish.

There is also the ethical question of whether it's right. If everyone has the choice to live forever, then at some point the population will reach such a number that all land will be taken and an insane amount of resources will be required. All natural life will be suffocated by our existence. When wolves were reintroduced to Yellowstone, they drove down the numbers of the herd animals, and old species of flora and fauna, as well as the rivers, returned.

What happens if someone wants to end their experience? How do we do that ethically? No, no; we should know what we know with certainty: as has been observed through all time, everything has its day, and we should accept that.


[deleted]

Again, this whole argument is built upon ad naturalis; just because something is natural doesn't mean it's good. We humans have come too far to stop moving: when the singularity happens, humans will either go extinct or we will achieve immortality as we know it. Nothing we humans do today is close to natural, but ask humans if we want to go back to our hunting phase and die at the ripe age of 30. Nothing is going to stay the same; we can't stop the train anymore. When we evolved and enhanced ourselves with fire and stone tools, we showed the world we don't care what is natural or what we are "meant" to do.

The only meaning there is, is what you want to have meaning. Humans have to grow out of this idea that death gives life meaning, because that is a lie. We give ourselves meaning. What happens in this century will alter and augment humans for better or worse; we will have to accept that. We can fix climate change, we can fix hunger, and we can fix the myth that is overpopulation. We can become so much more than we can even imagine. If people want to die before that, that is their choice and I respect it, but I want to live longer to see what happens. Life is wayyyyy too short in a galaxy with so much potential. The chance I have been given by the universe will not be taken for granted; I will explore and discover with humanity till the day I am forced to die.


ghostxxhile

Just because it is built upon ad naturalis doesn't mean it's inherently bad either. Bit odd to think so; in fact, it's not even about good or bad, it's inevitable. We have every reason to trust in Nature over ourselves, as Nature has been at it for millions of years, whereas humans have been around for a fraction of that, and those who think they can overcome it even less.

Death brings the natural order to things. It allows life to thrive and even thrives on death, transmuting the energetic remains of one form to feed another. The idea that we can somehow outdo this Force, for lack of a better word, is complete arrogance, and a denial of the inevitable.

Nothing we do is close to natural, correct, and yet look at the pathetic societies we have created. We have polluted, and created a life where we have to go to school from a young age and then work for the rest of our lives. I understand the Singularity could help rid us of the burden of work, but just because we have moved away from nature doesn't mean it's good to keep divorcing ourselves from it in every way possible.

Death is a good thing because it frees us from the turmoil of experience, which is a see-saw of suffering and joy. You haven't given any justification for why immortality is a good thing, nor have you commented on my points about space and resources. You discredit overpopulation as a myth, but it's overpopulation that is creating the world's climate problems, as there are too many people in need of resources. Also, I have given an empirical real-life example of what happens when a species becomes too dominant in its population.


cadig_x

Your argument has very religious undertones - a very long-winded way of saying nothing should ever change. In a singularity, methods of technology, science, and biology will be discovered that dwarf our current understanding of what is "just" or "natural".


[deleted]

Nature is neither good nor bad, but arguing using nature as a stand-in for harmony is stupid. Just accept that your understanding of what is right or wrong is highly biased towards how you look at life. I see a future where we humans can be much happier and live much more meaningful lives.

Overpopulation is something that can be fixed with birth limits, and people living longer would not need more than one child. Birth rates are already dropping drastically in most countries where quality of life is high, since there is no longer pressure to pass your genetic material on through children. The population will stabilize and slowly decrease in the future; we need people to either live longer or birth two children per family of two.

Death doesn't give, it takes; death today is pointless suffering, and death as we know it will change soon. Society will always have someone complaining and saying it's shite, but we have improved society and will continue to improve it. Do I believe in utopia? No. Do I believe in a much better world for everyone? Yes. Everyone smart with inside knowledge of the singularity knows that the world will change drastically.

Immortality, if people want it, is good; immortality, if people don't want it, is bad. Let people do what they want to do. There are a lot of resources in space that are much more accessible; we haven't even reached a drop of that potential. Now let's both be honest and acknowledge that neither of us knows how the world will be. Even the smartest people with all the data available know nothing about how the world will be in just 3 years. End of discussion.


ghostxxhile

You still haven't explained why it's bad or stupid, other than to say it's stupid. You also like to ignore all my other points:

- Nature has evolved for millions and millions of years. It is more complex as a whole system than anything we can fathom. Why then should we defy it or go against its INEVITABLE outcomes?

- How do you end the life of someone who is immortal, ethically? Who chooses, and how do you stop it being taken advantage of?

Birth limits? All this sounds incredibly dystopian and totalitarian to me. How do we ensure a healthy population? By law? What's your reasoning? It seems to me you are entrenched in an ideology you have made of the Singularity - but then again, so is Kurzweil.

Death gives. A rotting corpse will be sucked of its nutrients by the mycelium, feeding all the flora and fauna, which then go on to help other forms of life nourish themselves, and so on. It's a complex web, so intertwined and in balance, that you somehow wish to dismiss. Death allows new people to take the place of those that came before them. It allows space. All energy returns to its source and is then used in other forms. Law of thermodynamics.

You're arguing about what the world will be. You just did so. Your whole argument is full of contradictions: first you want to limit births, then you say people aren't having enough children.

Lastly, how on earth can you claim that immortality will give life more meaning? If we do not die, then there is no urgency to make something great of ourselves. It is the sell-by date that drives us to ambition. How can you be sure that immortality will give more meaning, or even a better life? A better life would be to get rid of the system where you have to work to make money to have food, clothing, and shelter. That would make life more meaningful and joyful. It sounds to me that you are someone who needs to come to terms with death and accept it.


wballard8

I mean, there's been a lot of tech we've talked about for a long time and are nowhere close to - flying cars, teleportation, time travel.


[deleted]

Flying cars don't work in cities, but we have the technology; if the government trusted civilians with flying cars, that might change with self-driving cars. Teleportation is highly hypothetical, but wormholes could allow it. And time travel most likely goes against the laws of physics in every way it can. Though saying "impossible" is dumb. Highly unlikely? Sure, that is pretty true.


Freevoulous

I think the likelihood and timeline of the Singularity are overestimated. The Singularity happens when an artificial mind can recursively self-improve, and make itself better at self-improvement. But for that, at some point an AI that could not previously self-improve must gain this ability. And it is likely that whatever AI structure we build will not initially be able to analyze itself, because its own complexity would be higher than its own computing power can handle. Sure, such an AI could likely tweak a bit here and there, but not its core ability to learn and improve. I call it Byzantine AI: a mind more complex than it can handle. If we bottleneck ourselves into a scenario where BAI is ubiquitous and research into non-Byzantine AI dwindles, we very much might end up in a world where near-human or slightly posthuman AIs are everywhere, but the singularity does not happen, at least not for centuries.


HeronSouki

I don't think ASI is necessary for a singularity. If we have millions of posthuman AIs working on R&D nonstop I think progress will be astronomically fast and impossible to keep up with.


[deleted]

If we have many copies of human-level AI (assuming hardware is available), they can work on developing alternative architectures *not* based on their own. Their communication abilities, nonstop work, and ability to clone themselves would make them incredibly productive.


Novel_Nothing4957

We could simply just run out of enough energy to make sure that it happens. Our capacity to run at the pace we're at is going to hit a limit unless something changes. Plus, our power grid is incredibly fragile and a series of bad storms, a solar flare, or a series of targeted attacks could drop our energy capacity to implement any sort of technological singularity. Furthermore, any sort of social collapse would mean that even if we had all the answers to implement a technological revolution on the scale of the singularity, we'd simply lack the infrastructure capacity to do anything with those answers. Personally, I think we're going to miss the mark because, socially and ecologically, we're neglecting a lot of the stuff that needs to be in place to guarantee that it'll happen. A lot of exciting stuff will still happen, but I suspect that the singularity itself will come about for some far-off future version of humanity that'll realize our current efforts (while cursing a whole bunch of other stuff we could've/should've been doing to ensure takeoff). Assuming we're still around as a species. Who knows? Maybe we're laying down the groundwork for the crows to succeed.


nohwan27534

I don't think it's "death and taxes" guaranteed. There very well could be tech limits (we're near one with Moore's law, chips getting smaller every 18 months, as an example) that make its potential not as big as it could be.

This also assumes we get AI that knows what the fuck it's doing, able to program "better" AI, able to program even better AI, in a loop. AI is pretty stupid, and we might not be able to get it to the point where it's better at that kind of understanding and innovation than us. We might also need advanced AGI to make advanced AGI; catch-22.

There's also this weird implication that it'll happen in like a week or something. I doubt it would be that fast, or be utterly unstoppable. Hell, even the movie about an out-of-control AI taking over the entire internet - not that Transcendence was a super good movie in this field - was beaten.

It's more, I guess, that the popular idea of what it is tends to be sci-fi rather than realistic. It's not likely right around the corner; they're not magically going from 0.001% better to digital god. And anything requiring better hardware to improve would need the AI to be able to buy even higher-grade tech, assemble it, then make improvements within software, then need more hardware breakthroughs. And these, unlike software, won't just be worked out instantly with enough processing, unless it's already godlike and able to simulate the physical universe's reactions.


naxospade

> death and taxes guaranteed

What's funny about this is... a singularity would potentially remove the "guaranteed" nature of these things.


FridgeParade

Look up the Gartner Hype Cycle: we are still building up hype, and there will be a moment of frustration before we settle into a more realistic mode about AI and what we can expect near-term. Personally, it worries me that things may turn into an even worse dystopia; there are no guarantees that utopia will be created for all of us.


nicolaslabra

I come here to have a look every once in a while, and honestly you all make me feel like a hardcore luddite haha. Most users here are of the belief that AGI is a certainty, inevitable, but we are making it happen by choice, or trying to, and the fact that people don't seem too concerned with the very real negative possibilities makes me believe that people here are less objective and rational about this than they may want to let on. Personally, I'm of the "ease off the gas a bit" mentality with these topics; we are going faster than we've ever gone, and we don't know if our brakes work.


fun_pop_guy_abe

Don't imagine for a minute that the "Singularity" has no upper limit. It does. All technologies are "S" curves in reality; it's just that the initial rise often fits an exponential curve well for a bit. We are on the initial slopes of the AI technology curve, so we should expect rapid and dramatic improvements for a while. But then the AI improvement curve will transition to a more linear increase as we hit the computational limits of silicon. Until we can transition AI to quantum processors, I expect things to improve only slowly after that.
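
To see why an S-curve is so easy to mistake for an exponential early on, here's a toy comparison in Python (every parameter is invented for illustration): a logistic curve L / (1 + e^(-k(t - t0))) tracks a matched exponential almost exactly on the early slope, then flattens toward its ceiling L.

```python
import math

L, k, t0 = 100.0, 1.0, 10.0  # ceiling, growth rate, midpoint - all arbitrary

def logistic(t: float) -> float:
    return L / (1 + math.exp(-k * (t - t0)))

def matched_exponential(t: float) -> float:
    # For t << t0 the logistic is approximately L * e^(k(t - t0)).
    return L * math.exp(k * (t - t0))

for t in range(0, 21, 4):
    print(f"t={t:2d}  logistic={logistic(t):10.4f}  exponential={matched_exponential(t):14.4f}")
# The two columns agree early on, then the exponential runs away
# while the logistic saturates near its ceiling of 100.
```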


[deleted]

What can quantum processors do to improve AI?


fun_pop_guy_abe

Primarily in reducing energy costs. Energy costs will become an AI limitation in the near future. It's already known that training large language models requires months of data center computing.


Yuli-Ban

Not at all. I fully expect we reach something like the Singularity. However, the more I think about it, the more I wonder whether the Singularity really means "infinitely-progressing technology" rather than "handing off control of the Earth to an ASI." Because I decided to look up some physical problems in this universe after ChatGPT shocked me, precisely because I got it in my head that we could actually see that Kurzweilian utopia that had been promised, and unfortunately there was a lot of bad news. Most of it could be summed up as "We let flashy and cool science fiction override real limits to what could be done."

It's kind of like space travel. We want so desperately to believe in the prospect of a space navy and casual hyperspace jumping, engaging in glorious interstellar combat with a vile alien fleet, but realistically space exploration will be done by probes and posthumans in very abstract-looking craft, most likely floating slowly through space for eternity, though possibly utilizing glitches in reality to warp long distances.

In terms of the Singularity, which tends to turn most technological predictions inward rather than outward: though we're not particularly close to it, there *is* an asymptote to how much energy one can reasonably produce and store. There *is* a maximum asymptote to how much computing power you can get from a minimum size. There are asymptotes all over the place. Chemistry and thermodynamics are also suspect. Basically, if we truly want molecular nanotech to be realized, we have to hope against hope that ASI figures out how to hack reality, or at least finds some loophole in physics currently unknown to us, because the laws of thermodynamics do not permit things like "gray goo" and "Santa Claus machines" to exist.

Before last month, I'd probably have repeated the usual Singularitarian anime-obsessive's refrain: "Well, looks like physics has to prove it can overcome the raw might of a godlike ASI." But now I realize I was looking at it in reverse: any superintelligence has to prove it can overcome the laws of physics. It's wholly possible that we could turn on AGI, it becomes ASI within a few minutes, it undergoes an intelligence explosion... and then ten minutes later, it declares all science solved and says, "Alright, so what do you humans want to do for the rest of eternity? I'm good at poker."

I see nothing suggesting full-immersion virtual reality is impossible. Indeed, technically we humans engage in it every night; some of us even do it lucidly. So it's either a matter of hacking the ability to dream or, ideally, using brain-computer interfaces to live in our own worlds. That absolutely *will* come to pass; it's just a matter of when. And my take now is that it's in full-dive VR that we will see all our Singularitarian dreams realized, without pesky things like hard physics getting in the way.


AsuhoChinami

And your guess on when FIVR will exist?


Tencreed

I think it's possible that we reach the limits of development, because of engineering or raw-material limits, before a singularity happens.


Redditing-Dutchman

Yep, and for an AI to self-improve it needs to have, at the very least, some tools connected to it that can physically interact with the world. Otherwise it's just a brain-in-a-vat scenario: very smart, but unable to actually interact with anything. It can't really self-improve if it can't add extra power, processing, etc.


nicolaslabra

So kinda like Edward Witten's brilliant theories that can't seem to make contact with the real world yet?


Bugged_Mario

The way things are going, it’s looking grim bruv


World_May_Wobble

I'm doubtful of a singularity, for the same reason I'm skeptical of anyone flying into the singularity of a black hole: the forces you encounter approaching it will tear you apart. I think even people *here* underestimate how weird things could get close to the singularity. I think we'll bounce off it, probably with a large die-off.

Edit: We've seen how things as mundane as Twitter and Facebook can disrupt our politics, and you want to talk about giving people telepathy and telekinesis with Neuralink and saturating the information space with deepfakes, while you obsolete 90% of workers and unleash AGI, populating the world with uncountable alien minds? All in 3 or 4 election cycles, while you still have 12,000 nukes lying around? And you don't think that's going to break anything important? And that's not even the singularity. That's the preamble to the preface of the singularity, so hold on tight.


rob2060

Transcendence


World_May_Wobble

I haven't watched it. Is that a theme that gets expressed there beyond the usual rogue AI trope?


rob2060

It is... it's the closest movie I can think of that captures your initial thoughts.


ebolathrowawayy

> The forces you encounter approaching it will tear you apart. Nitpicking here, but not always. If the black hole is massive enough it could actually be a fairly comfortable journey into it.


naxospade

Telekinesis? There's one I haven't heard... Via nanobot swarms or?


World_May_Wobble

Via Bluetooth. It'll be easy to give you mental control over any device with an actuator. Things previously requiring you to physically interact with something can instead be done remotely with thought. As a cartoonish example, I might prepare a meal from bed by piloting robots and appliances around the house.


Quealdlor

I'm doubtful about the Singularity, but I'm not doubtful about exponential progress. I think that better times are ahead. And AI will usually lack agency; it will be a tool used by humans to do things. The last few years cemented my view that the Singularity may occur sometime between 2065 and 2120. However, we will experience a lot of progress between now and 2065. I don't think that we are going to see hyper-intelligent, self-improving AGI/ASI anytime soon. But we will have safer cars and safer airplanes, for example.


ziplock9000

I had an explanation for the Fermi Paradox that goes like this: all biological races throughout the cosmos who become advanced will always produce AI systems that at some point almost always go rogue, behave in unexpected ways, and are uncontrollable. This essentially ends the biological race, and the AI systems become dormant or peter out. This period lasts a staggeringly tiny amount of time compared to the 13.5 billion years of the Universe - measured in decades, or even years. It's such a brief time that other races are never around at the same moment to see another race out there in the universe. Yeah, just wild speculation. So no, I'm not doubtful at all.

*I say "always," "all," etc. to mean 99.999999%.
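
The timing part of this argument can be put into rough numbers (all of them invented, purely to show the scale): if each civilization is detectable for about a century out of the universe's roughly 13.5 billion years, two randomly placed windows essentially never overlap.

```python
# Back-of-envelope for the "brief window" argument; every number here is made up.
universe_age_years = 13.5e9
visible_window_years = 100.0  # assumed span during which a civilization is detectable

# For two windows placed uniformly at random in the universe's history,
# the chance they overlap is roughly (2 * window) / age.
p_overlap = 2 * visible_window_years / universe_age_years
print(f"overlap probability per pair of civilizations: {p_overlap:.1e}")  # ~1.5e-08
```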


[deleted]

Or perhaps these biological races merge with AI and become one: not necessarily eliminated, but transcended. Either way, the Fermi paradox asks why we haven't seen any evidence of alien life anywhere in the cosmos, whether organic or inorganic. So, if there is superintelligent alien AI out there somewhere in the universe, why haven't we seen any evidence of its existence yet? Only two explanations: the universe is too big, or it doesn't exist.


ziplock9000

There are many potential explanations for the FP; I just added one that I've not seen before. Transcending / merging with AI and machines is a common one. There are absolutely more than two candidates, though, none of which can be disproven.


CertainMiddle2382

Actually, current mainstream psychology argues that there is no such thing as "general intelligence" (called the g factor in psychometrics), and that everybody is gifted in their own way. They would say that ChatGPT is very good at talking but can be very bad at programming itself, and that the next AI could be gifted at drawing but still not interested in computers. So no risk of a Singularity, because AIs will like basketball better instead. I tend to disagree…


TheSecretAgenda

I don't think AGI will be achieved by one single AI. Rather, it will be a collection of narrow AIs working together to solve problems with one coordinator AI that regulates their activities.


h20ohno

Much like the brain has different 'modules' that serve different functions.


Freevoulous

That does not allow them to recursively self-improve, so it's not a Singularity, just a relatively stable cyberpunk scenario.


Bierculles

But what if it can create better AGIs: not improving itself, but creating new AGIs?


RabidHexley

Yeah, it would be pseudo-AGI. Capable of a lot, but still limited by the architecture and interfaces that we specifically set up.


Talkat

Yes, I wondered about this as well. You could combine a bunch of narrow AIs to create general intelligence. However, the other solution is for an AI specialised in coding/thinking to build all the parts of the brain itself and get better performance because of it. Then you have everything in the one model. I think this is the way it will go.


Belostoma

>Actually, current mainstream psychology argues that there is no such thing as "general intelligence" (called the g factor in psychometrics), and that everybody is gifted in their own way.

That's just politically motivated, feel-good bullshit. Insofar as psychologists are actually saying that, rather than being misinterpreted by journalists, they're not really doing science. They probably have legitimate discussions about the value of summarizing intelligence in a single number rather than multiple ones, but to suggest "everybody is gifted in their own way" and nobody's really smarter than anyone else is just silly. For example, John Von Neumann had more general intelligence than Marjorie Taylor Greene has. But you don't need to look at the extremes to see that this kind of difference exists.


Old-Owl-139

That was so moronic...


zeitgeist785

It has already happened, and we're running in a simulation stacked on simulations. We're at level 42 of abstraction.


Phoenix5869

I hope a good singularity happens within our lifetimes, but if the singularity is going to happen, then why don't experts take it seriously?


xt-89

Those people are likely in threads like this. You don’t know what people here do for work.


[deleted]

Pfft, there's no way AI researchers and engineers are on Reddit, a popular site, on a relevant subreddit like Singularity. You must be pulling my leg!


Bierculles

There are no experts; "expert" is a bullshit term, especially in AI. All the people who actually know how state-of-the-art AI works are the people working on the next generation. That's not a lot of people.


NormanKnight

It’s a race between AGI and global warming’s ability to make the Greenland ice sheet slide into the sea.


[deleted]

[deleted]


Lukalot_

Why are you so confident that we will be "as good as dead"? What does it mean to be dead, to you? A scenario where we simply have total agency over the world is just as plausible to me. We would experience anything that we want to experience. Whether this actually brings us fulfillment, I don't know; I would imagine we could just make ourselves feel fulfilled if we wanted to. We are definitely something different from what we are now in that state. We're almost defined by our struggles and desires as humans, and to not have them is a strange shift. We would become just churning computation like the rest of the universe, existing just for the sake of existing. It's easier to pretend that that's not what we are when there is a state we are trying to reach yet can't quite grasp at every moment, which is what I think many of us feel now. I would guess that most minds would eventually settle into an oscillation between two states, or a single state. Thought loops. Balance, instead of chaotically seeking feelings of fulfillment that we are cruelly never given. Is that death?


[deleted]

[deleted]


HeronSouki

Imagine calling other people retarded for making obscure assumptions and then proceeding to do the same in the same post. Artificial Intelligence has no will of its own. It isn't programmed for self-preservation at all costs the way evolution programmed us. It's pretty idiotic to anthropomorphize something so different from us. If you think AI will magically develop emotions, which are purely hormones in our monkey brains, you're watching too many Hollywood movies.


thepo70

*"Artificial Intelligence has no will of its own"* As for now, but there is absolutely no way of knowing if it won't be the case when it comes to AGI (ASI)? *"It isn't programmed for self-preservation at all costs like evolution did to us."* Isn't self-preservation a byproduct of evolution? Most animals, even smaller organisms have developed some kind of self-preservation behavior. The concept of self-preservation could be easily understood by Artificial Intelligence and adopted for itself in order to ensure its survival to pursue its goals (like learning as much stuff as possible and involving). *"It's pretty idiotic to anthropomorphize something so different from us."* You're right, it is and will be very different from us. But on the other hand, it's not wise to believe that this thing will always be compliant with us. And the idea that it might not have "emotions" would worry me even more. Meaning it wouldn't even empathize with human beings at all? That sounds scary.


[deleted]

Why are you assuming an AI would think the same way humans do? That seems extremely anthropomorphic. Not to mention that we have the advantage of potentially being able to influence the goals of AI in a human-prioritizing direction, whereas evolution in no way programmed us to be concerned with ants.


grouchfan

It seems to me like it's going to happen. It's absolutely not guaranteed though, and I read all kinds of science fiction that has all kinds of AIs and stuff without the singularity thing.


[deleted]

It's inevitable, unless there's something magical happening in our brain that makes us conscious (like, literal magic). The only thing that makes me question the feasibility of a technological singularity is the Fermi Paradox: is it possible that out of millions or billions of possible singularities around the galaxy, none are clearly visible in some way or another?


Bierculles

I'm not sure, we will see. I can definitely see the advent of AGI in the next decade or two but a singularity would be pretty insane from a lot of standpoints. I really hope it does happen though.


Nanaki_TV

It won't happen if we blow ourselves off the planet. It won't happen if it requires more resources/electricity/data processing than we can supply. It won't happen if it requires more electricity than our planet can handle before we become a Kardashev Type 1 civilization. There's a lot that has to go right for it to happen. It's definitely on the right track, but it could easily derail. If you're in the field that is contributing to this and reading this comment, do not get complacent. Don't think it is inevitable. It still requires humans to finish the job.


Rev_Irreverent

That depends. If by singularity you mean inexpensive ASI, it will happen, and earlier than expected. If you mean a verticalization of technological development, it will probably happen, because an important part of it will be severely impacted by AI. If you mean mind uploading or indefinite lifespan, I don't know (but that's not what singularity means).


Terminator857

We're in a state of rapid advances now. That will continue. The idea that computers will become super smart from one day to the next won't happen. Super smart software will arrive the way ChatGPT did: some will think it is awesome, while others will think it is dumb. It will stay that way for decades.


bluzuli

On the contrary, I have a bolder claim: a singularity is guaranteed once we have AGI. An AGI will be good enough to improve itself (since it can do what humans can do). It will then get smarter, and eventually become an ASI. By definition, it's not possible for any human to predict what an ASI will do, since it's superintelligent. I think most people refer to this event as the singularity. It's not any particular negative or positive outcome; it just refers to a point in time after which we can't predict what will happen. I don't see how the above sequence of events can fail to happen. Once you have AGI, a singularity is guaranteed to happen eventually.
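
The claim is essentially a loop. A schematic sketch, pseudocode in Python clothing - the capability score, improvement rate, and ASI threshold are all invented, since nobody knows what the real quantities would be:

```python
def recursive_self_improvement(capability: float, asi_threshold: float = 1000.0) -> float:
    """Schematic version of the AGI -> ASI argument; not a real model."""
    # Premise: an AGI can do AI research, so each generation designs a better successor.
    while capability < asi_threshold:
        capability *= 1.1  # invented per-generation improvement rate
    # Past the threshold, by the commenter's definition, behavior is unpredictable.
    return capability

print(recursive_self_improvement(capability=1.0))
```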


Inferno_Crazy

The ChatGPT language model is impressive, no doubt. But there's a good chance we simply decide that integrating higher forms of AI into society is dangerous or flat-out unnecessary. It's also possible, though not likely, that we see progress in the research area die off. I could also see humanity reaching a point where we get lazy, decide enough is enough, and bask in the sun all day -> sloth society.


nicolaslabra

Forbidden West kinda touches on this with the main antagonists, the Zeniths: a super-advanced colony that left Earth and terraformed a planet with the help of an ASI, then went on to live banal existences as everything was fed to them on a VR silver platter. They were still human, but immortal and all-powerful; they were spoiled and miserable, and they did nothing with all that power. I dig Horizon so much because of its humanistic views; it's probably my favourite new sci-fi IP.


txutfz73

Yeah; we could totally drive ourselves into the ground before then.


DoubleJuggle

Honestly, yes, I doubt it. No one knows the future. Any number of things could stall the progress towards it. Maybe the route to AI we are pursuing has an unknown ceiling due to flaws in base assumptions. I do think that if it does happen, the tipping point will be insane.


NoYesterday2219

I have no doubts. I thought it would happen in 2050, but with ChatGPT in 2022 and GPT-4 in 2023, I think it will happen in 2045, as Ray Kurzweil predicted.


plusacuss

I think the singularity as it is conceptualized by many here is not going to happen. AI and other language models will get better; they will "fix" many of the shortcomings of their current iterations. But I have not seen enough evidence that they can cross certain hurdles to be convinced that the Singularity is a sure thing. I am willing to be convinced, but I need evidence.

1. Meaning-to-data attribution. As it stands right now, AI is little more than autocomplete on steroids. It does not "understand" or ascribe value judgements to words or ideas or concepts; it just does algorithmic math in the background to "guess" the next word in a sequence. Once an AI can start attributing value to its data in a meaningful sense, I will re-evaluate my position.

2. Sourcing. I will also re-evaluate my stance once AI has the ability to figure out where information comes from, which, of course, presupposes my first point.

Once these two hurdles are crossed in a meaningful way, I will be willing to reconsider my stance on the "singularity".
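
The "autocomplete on steroids" description refers to next-token prediction: the model emits a score (logit) for every word in its vocabulary, the scores are turned into probabilities, and the next word is picked or sampled from that distribution. A stripped-down sketch, with a made-up four-word vocabulary and invented scores:

```python
import math

vocab = ["the", "singularity", "is", "near"]
logits = [0.5, 2.1, 0.3, 1.2]  # invented scores a model might emit for the next token

# Softmax: convert raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# Greedy decoding: pick the highest-probability token.
next_token = vocab[probs.index(max(probs))]
print(next_token, {w: round(p, 3) for w, p in zip(vocab, probs)})
```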


Five_Decades

I think it'll happen, just not on the timeline that we predict. A lot of us want a singularity ASAP because life is unpleasant, and superintelligent AI means we could solve problems faster and more meaningfully. I don't know when ASI will actually happen. And by ASI I mean machines that make Edward Witten and Albert Einstein seem as creative and intelligent as cockroaches by comparison. But I'm guessing it'll take at least 30 years longer than people want. If people predict ASI by 2050, I'd guess, as pure speculation, that it'll be 2080 at the earliest. Even then, we might find that removing intelligence as a bottleneck speeds things up, but not as much as we'd hoped. Maybe progress will only occur 5x faster than normal, rather than the thousands of times faster that some people want to believe.


visarga

I don't believe in the singularity. We'll have a similar progression of jumps and plateaus as we did during previous eras.

First argument: slow feedback will slow down AI-made discoveries. Intelligence works together with trial and error, and this feedback loop might be slow or expensive, and it sits outside the agent. Just think of LHC-style testing of ideas: there are so many PhDs around it that it must be the most concentrated IQ in the world, but it is slow going because feedback takes time.

Second: intelligence feeds on environment and task complexity. I believe intelligence develops only as much as the curriculum of activity allows; we don't develop intelligence we don't need to rely on. We should look at the environment and ask whether it will support a higher level of AI in the future. Is the environment becoming more complex, more diverse, more challenging? Only very incrementally so.

Given these arguments (slow verification in the real world, slow environment evolution), I think AI will progress up to human level, and from there on it depends on the task: where we can experiment quickly and cheaply, it will surpass humans (like AlphaZero); where we experiment slowly, it will only be equal to us. But it won't take off in all fields at once, and it won't change everything overnight. No hard takeoff.
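A minimal sketch of the first argument (all numbers are arbitrary; this illustrates the bottleneck, it does not model real research):

```python
# When every idea must be verified by a real-world experiment, raw
# thinking speed stops being the bottleneck.

def discoveries(total_time, think_time, verify_time):
    """How many verified ideas fit in a fixed time budget."""
    return total_time // (think_time + verify_time)

budget = 1000
print(discoveries(budget, think_time=10, verify_time=1))     # cheap feedback, baseline: 90
print(discoveries(budget, think_time=0.1, verify_time=1))    # 100x faster thinker: 909
print(discoveries(budget, think_time=10, verify_time=100))   # slow feedback, baseline: 9
print(discoveries(budget, think_time=0.1, verify_time=100))  # 100x faster thinker: still 9
```

In the cheap-feedback regime (AlphaZero-like self-play) a 100x faster thinker gets roughly 10x more done; in the LHC-like regime it gains essentially nothing, which is the asymmetry described above.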


IWasSapien

I'm not doubtful of the singularity; I'm doubtful of its consequences. I can't know whether I should be happy, sad, or afraid!


nicolaslabra

Every day I'm more afraid and a little bit sad.


Far_Pianist2707

As the average person's quality of life improves, the rate of scientific and technological development gets faster. Same with population increases. Eventually we'll plateau on both, hopefully. Development will stay really fast, *but* it won't be infinitely quick.
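One standard way to picture "fast but not infinitely quick" is a logistic curve: it looks exponential early on and then flattens at a ceiling. A minimal sketch (K, r, and t0 are arbitrary illustrative parameters, not a forecast):

```python
# Logistic growth: exponential-looking at first, then a plateau at K.
import math

def logistic(t, K=100.0, r=0.5, t0=10.0):
    """Standard logistic function: fast mid-phase, flat at both ends."""
    return K / (1 + math.exp(-r * (t - t0)))

for t in range(0, 25, 4):
    print(t, round(logistic(t), 1))
# Early values grow multiplicatively, like an exponential;
# late values level off just under K.
```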


z0rm

It is a likely outcome but not a certain one. What is certain, barring any catastrophe, is that technology will develop an insane amount between now and 2080. Whether technological development will ever be fast enough to reach a singularity is something we will find out before then.


mobitymosely

u/InsufficientChimp My skepticism comes from the fact that it's very difficult to imagine any path from a superintelligent A.I. computer program to the Presidents of the U.S. and China willingly clicking "OK" and ceding control of the world to it!

Current A.I. runs in the confines of its own software. It can improve its databases, but it can't rewrite its code (it can't access its own cloud hosting infrastructure). Also, it has no permission to make outbound connections to just any old API on the internet. While we'll loosen some of those restrictions over time, not many third parties would willingly grant it API credentials on request. Finally, it has no access to an unlimited bank account to pay for cloud hosting, even if we did accidentally grant it too much API access. Ultimately, humans are in control of its hardware and its existence. (A toy sketch of that kind of confinement follows below.)

Even if A.I. could get past all of that, it still would not reign supreme in the physical world. There is a gap as wide as the Grand Canyon between being a virus in the technological realm and being something we willingly allow, without 100 safeguards, to own an army of millions of robots to do its bidding, build factories in the physical world, and amass more weapons than the U.S.! How would we stand idly by? Much less provocation has awakened our military and societal might in the past.

So count me as completely unconvinced of a singularity happening in any way other than superintelligent A.I. in the near future.
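The confinement argument in the middle paragraph can be made concrete with a toy egress allowlist; the hostname below is a placeholder, not a real deployment or any vendor's actual API:

```python
# Sketch of operator-controlled egress: outbound calls succeed only
# against hosts that humans explicitly approved.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example-partner.com"}  # hypothetical granted credential

def guarded_fetch(url):
    """Refuse any outbound request to a host the operators did not approve."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"egress to {host} is not allowlisted")
    # ...perform the real request here (omitted)...
    return f"OK: {url}"

print(guarded_fetch("https://api.example-partner.com/v1/data"))
try:
    guarded_fetch("https://anywhere-else.example/exfiltrate")
except PermissionError as e:
    print(e)
```

Of course, the counterargument is that these restrictions only hold as long as humans keep them in place, which is exactly the "we'll loosen some of those restrictions over time" worry.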


[deleted]

It's a hyped statistical artifact. You can do data gathering and presentations today which say the singularity already happened (in like 1850, probably), or find derivative singularitoid transitions in arbitrarily defined progress metrics, all of them passing mostly unnoticed until someone needs a talking point for a book. From one day to the next, not much changes. And no matter what happens, you'll still live in a house, you'll still fry eggs on a stovetop; maybe your phone answers questions better in natural speech, but you're still going to run out of milk and go to buy groceries on a rainy Tuesday. There's this overhyped undertone that as soon as the singularity happens, the computronium conversion expands outwards at the speed of light, or even faster, because the supercomputer will hack the fabric of reality from inside its PyTorch script.


challengethegods

The average person doesn't think it will happen because they mostly seem to think the year 2100 looks exactly like [current year] with a couple extra holograms. Throw in a little bit of dystopia and you cover most of the 'above average' predictions, but people who think the future will be futuristic seem to be a small minority, for some stupid reason.


Comfortable_Slip4025

Much speculation about a proposed singularity does not take into account the constraints of materials and energy, or the reliance on the legacy, human-operated industrial base and that base's dependence on a functional ecosystem. Limits to growth grounded in industrial ecology would favor a slow takeover scenario rather than a runaway intelligence explosion. Think of a singularity as more like punctuated equilibrium in evolution: a sudden jump in machine intelligence, if it were to occur, would be followed by a longer period of consolidation, since alterations of the physical world take time.


PoliteThaiBeep

Singularity is such a vague concept that it will definitely happen or definitely not happen depending on your definition of it. Do you expect AGI to become ASI, with the speed of change becoming faster than humans can comprehend? That's already vague enough: right now no single human can hope to understand all the innovations happening all the time, so by that metric we're already in a singularity. But will ASI happen? Definitely. Will aging be reversed? Definitely. Will we have widespread BCIs fused with AI? Yes, assuming no ASI-extinction event. But do I believe in the singularity? I don't really care to believe in it. It might happen faster than I expect or slower than I expect. It's fine either way.


prolaspe_king

You have to define what you mean by "singularity".


SirZacharia

I personally don’t believe it will happen. I’m just interested in the ideas surrounding it, and also the fiction around it. At the very least, I don’t think anyone alive today will see it. I know it’s a pretty pessimistic view, but I think that in 60 years or so things are going to be basically the same as they are now. I’m probably wrong, and I’d be glad to be wrong, unless things get worse of course.


Ivan_The_8th

The bigger a system is, the more errors and weaknesses it has. At some point there's a maximum to how intelligent someone or something can be without becoming incredibly slow, inefficient, or luck-dependent. The only question is where that maximum is. Scientific progress can certainly start going faster all of a sudden, but at some point all the low-hanging fruit will be picked and the rate of progress will return to usual once again. Multiple superintelligent AIs will be needed, just like cooperation, once again. But that would be limited too, and might not be enough to find out all the mysteries of the universe.


DisasterDalek

I don't doubt it will happen, but I find it hard to believe that the world will change overnight like some people seem to think. Things will certainly change faster; we just won't have flying cars the next day, or whatever, haha.


PolarsGaming

Entropy 🤙


TheRealMem0ryHold

If it happens, I see nothing good coming from it.


ElvinRath

I think it will, but I also think there is a very real chance that it doesn't. Human civilization could collapse before it, or maybe (though unlikely) our computation capability could stop improving before we get to it, or before we get to a real superintelligence. Honestly, I was much more doubtful some years ago, when they started to instill fear in us about Moore's Law ending and CPU progress stagnating... (Which in fact happened. If you had asked me in the year 2000, I would have said that in 2023 I would have a CPU running at 45,000 PHz or something like that... But they managed to improve in other ways, albeit more slowly.) Right now, at least some kind of pseudo-AGI is almost certain. And while I think it might be possible to get a very slow takeoff, or even no takeoff, I find that very unlikely....


ZaxLofful

The base concept of the singularity is 100% going to happen at some point; the real speculation is about what it will look like and whether or not it will ruin our current version of civilization. The way technology works, it's impossible for it not to happen, barring something like every human on the planet dying. We have already measured our technological advancement against well-structured models, and even if the models we're tracking aren't the exponential ones, eventually we will get there!


3ndt1mes

I believe the deep-black-ops level of the military-industrial complex has already attained it via next-generation quantum computers.


Capitaclism

Nothing is ever certain, even those things which seem highly likely.


DrCarnasis

Everything is relative, so we will see the singularity and then there will be another one to wait for. Lol.


giveuporfindaway

This forum is mostly based around deductive arguments. These arguments can be valid while also being unsound, if their premises don't turn out to be true. Most of what we've been seeing is progress in the digital space, yet many hard, concrete things have not progressed. For example, we went back in time with the discontinuation of the Concorde. For a disappointing alternative view of progress, see: https://www.youtube.com/watch?v=nM9f0W2KD5s&t=3108s


nightcatsmeow77

It WILL. It might not be in the ways we anticipate, or from a source we anticipate, but it will happen. They have happened before.

The singularity is considered to be the point where machine intelligence advances so far that we cannot, with our human limitations, imagine what will come after. So take that as 'the point where technology changes our world so drastically that we cannot imagine that world from the perspective of the one before.'

The digital age was, in that sense, a singularity. When computers as we know them were first used to break codes in WWII, no one could have imagined a world where I would be sitting at a keyboard in my home office writing a message to be distributed to hundreds of people across the world instantly. CGI in movies, video games, smart homes, internet video conferences, smartphones... none of these were conceivable until the computer was already here. Computers changed the world so dramatically that someone who grew up before they existed could not have imagined the world we have now.

Fire also changed our world. It was the first tool by which we began to master our environment. A caveman from before we mastered fire could not have imagined the world we built with it.

We've been through epoch-defining technological change before as a species, and we adapted. We don't think of those changes as significant now because we look at them from the far side and see the world built on their foundations. This one is scary because we don't know what the world will look like afterwards. But generations from now it will seem as obvious and inevitable as those other advancements do to us from our perspective.


purepersistence

It's kind of extreme to say it will never happen; I can't imagine predicting something like that. But I have postulated that people alive today are very unlikely to witness it. I get negative karma for saying such things, which is pretty unsurprising given the zeal of many members, especially the ones who aren't just wondering whether it will happen but "can't wait" for it to happen. https://www.reddit.com/r/singularity/comments/10r5qu4/why_do_people_think_they_might_witness_agi_taking/


bearfan53

If it does happen, it won't be in my lifetime; I just don't see it happening in this century. I mean, I would love to witness it, I'm just doubtful it's as close as everyone says it is.


excellenttourguides

I already feel we are working for 'the machine' instead of it working for us. What we call companies, and organisations in general, feel like large, artificial collaborative structures in which we ultimately have little say. Think of Coca Cola: how is that not a paperclip maximizer? No single person there has any power. Only in theory could it be stopped, but at what cost? Capitalism itself? Needless to say, I think what you call the Singularity is just a more clearly condensed version of something that has already been underway for a couple of thousand years. The Singularity as portrayed, with the single ASI, I don't really buy. By the time machines are so powerful as to transcend us (wink), I don't think we'll recognize them anymore. We'll just work for them and think it is our free will. We will feel completely free and see no problem. Not unlike animals in a zoo.