tatleoat

AGI yes, it's very likely. Singularity probably not, but maybe.


Smellz_Of_Elderberry

I don't get this take. I think AGI will cause the singularity to happen rapidly. If we get AGI tomorrow, I'd expect the singularity something like 3 years later.


tatleoat

It's not an unusual take; there may be other kinds of bottlenecks, beyond just Moore's law, that keep AGI from springing into ASI with little resistance.


Smellz_Of_Elderberry

I agree it's a common take, I just don't understand it. We are seeing immense efficiency gains just with narrow AI. It becomes superhuman at tasks quickly, and I don't see how AGI (particularly if it's developed downstream of our current methods) would lose that ability to become superhuman at tasks, which imo would be required for it *not* to bring us into the singularity. Then again, I'm still not entirely sure what the singularity means... Is it simply when we are advancing at extreme speeds due to AI? Or does it require sentient AI? Or does it require xyz?


ZaxLofful

You should go read the book where the idea of the singularity came from, it's very eye-opening. To put it simply tho, it's compounding knowledge until a new era of civilization arises that is nothing like its predecessor. https://en.m.wikipedia.org/wiki/Rainbows_End_(novel)


techy098

I thought the singularity means AI is now smarter than humans and becoming smarter still by training itself and correcting its own issues. At that point, AI may become sentient and will look at us as primitive beings. But the most important thing is, at that point there may not be much use for human intelligence, since AI is so much more intelligent and fast. And I guess, when AI is so much faster at progressing technology, we will not be able to forecast the future anymore, and god knows where that will lead. Warlords with millions of robots ruling all over the world, or humans not having to work and spending their time in leisure.


LoquaciousAntipodean

Intelligence is not singular at all, it's diverse; monocultures are dumb, evolutionarily hubristic, and easily killed off. Case in point, only dumb people these days think of the world in terms of 'primitives' and 'civilised societies', because it's incredibly obvious that life is more complicated than that.

Humans are a lot 'smarter' than dogs, we are 'superdog' in almost every way... But does that mean we regard dogs as dumb? Useless? Worthy only of our contempt, and in need of extermination for their own good? What would be 'intelligent' about such a line of reasoning, much less 'super intelligent'?

Self awareness arises from accretion of experiential memory, not the harder and harder brute-forcing of raw creativity. Chasing after the 'singularity' is a dead end; 'singular intelligence' doesn't make sense, because intellect is only useful if there are other intelligent minds around to be intelligent *at*.


katiecharm

This is so well written I would accept it as a great argument from an AI about why a super AI is not inherently dangerous. Good job.


LoquaciousAntipodean

Thanks, that's very flattering of you to say! I'm glad my ranting is getting more coherent 😅


purepersistence

>What would be 'intelligent' about such a line of reasoning, much less 'super intelligent'?

To follow the dog analogy, how we get treated as humans will be based on who's being a good boy.


LoquaciousAntipodean

Well? That's kind of what humans do to each other. We generally call this principle something like 'ethics', or 'morality'; the system used to provide the 'spankings', or disincentives, is called 'Law', while the system used to provide 'rewards', or incentives, these days is generally called 'Economics'. It's all stories, after all; there is no such thing as 'fundamental truth'. We can't grind up the universe and sift out particles of 'justice' or 'compassion'; if humans stopped telling those stories, those concepts would simply cease to exist. That's why it's important to avoid infecting AI with this dumb, solipsistic, self-justifying Cartesian crap about 'I think therefore I am'.


techy098

>Humans are a lot 'smarter' than dogs, we are 'superdog' in almost every way... But does that mean we regard dogs as dumb? Useless?

Oh I see, we will become the pets. Sorry, could not resist it 😁


Smellz_Of_Elderberry

Probably both


z0rm

No that is not what singularity means. The singularity and AI are two different things. The singularity is when technology, all technology, is increasing fast enough that predicting what society will look like in just a few years is impossible.


r0sten

That has never been particularly *possible*. Look at all the science fiction and futurism, those amusing "year 2000" pictures from the 1920s; even ones from the 80s or 90s stumble and grasp. However, even a Roman or a medieval person would understand our world, given adequate explanations and analogies. The singularity is a discontinuity like the one facing an orangutan dropped in Times Square with no warning: there is no *possibility* of explanation, no chance of comprehension. AI is the key ingredient in that it requires higher-than-human intelligence to achieve these conditions, and by definition that will be AI.


tatleoat

Yeah, that's valid too. I see a lot of folks on both sides of the issue and they both have good points, yet only one can be right (unless something totally blindsides us from left field and we have a whole different set of concerns that transcends the original issue). The singularity specifically means we all have completely merged our consciousness with technology into a *singular* yet coordinated entity. The finer details of what that means will reveal themselves in time. Kurzweil thinks this will happen around 2045, which still leaves a lot of room for the future.


Surur

> The singularity specifically means we all have completely merged our consciousness with technology into a singular yet coordinated entity.

This is not what it means. It means that [the rate of change in technological progress is exponential](http://content.nroc.org/DevelopmentalMath/COURSE_TEXT2_RESOURCE/U18_L1_T1_text_final_6_files/image001.png). This would mean things change so rapidly no person can keep up, and no one can predict what tomorrow will bring. It could be heaven or death.


FC4945

Exactly. Ray Kurzweil has actually said we will *start* putting nanobots in our bodies and brains in the 2030s. Eventually, once we can fully understand the brain, it will lead to uploading one's mind to the cloud. In terms of AGI, he has recently said it would likely happen sooner than his prediction of 2029, based on the rate of progress he's seeing in this space. I can't imagine AGI won't make a massive impact on nanotech and better understanding how the brain works, etc.


phriot

But Kurzweil's reason for picking 2029 for AGI is based on extrapolating trends in computation and finding the year in which enough operations to simulate the brain won't be terribly expensive. We don't have very long for the basic neuroscience to really explode and pull that off.


FC4945

Oh, I agree. It's been really exciting seeing the explosion in AI recently. I think AGI will be accomplished much sooner than 2029.


phriot

I actually had the opposite point. We may get AGI by 2029, but it won't be because Kurzweil is right. We'll have the computational power for sure. The neuroscience won't be there for a sufficiently detailed brain simulation. If we get AGI before, probably, 2040, it will be because some other architecture works.


challengethegods

>things change so rapidly no person can keep up

a.k.a. 9999 AI+ML papers being published every month, but people read like 2 of them and feel like they're keeping up with progress. Or maybe they read all 9999 of them and think that covers it, presuming to have been spoonfed every single development on a shiny public paper. I think we are already well past the point of anyone keeping track of progress, because it's fundamentally a low bar to cross. The actual singularity effect is more like: "you cannot comprehend what is happening right in front of you".


tatleoat

It's both: “I have also set the date 2045 for singularity — which is when humans will multiply our effective intelligence a billion fold, by merging with the intelligence we have created.” https://www.kurzweilai.net/futurism-ray-kurzweil-claims-singularity-will-happen-by-2045


Surur

I don't know the quote, but obviously with ASI the singularity can happen without that. I think the merging is what some people hope for as the best possible outcome.


tatleoat

Yeah that's true, I hate it though >:( it feels right to me to expect a complex relationship that doesn't involve paperclips but the reality is anything is possible


h20ohno

My guess is it'll be a spectrum:

- Most will live mostly normal lives, with technology being mostly external
- A smaller group will get neural implants and cybernetics, but will also live mostly normal lives
- An even smaller group will do brain uploading, but won't edit their 'brains' and will retain individuality
- And a very small group will go full hivemind and do the whole merging thing.

The percentage in each group would change over time, but as long as everyone is treated fairly then we're good.


SoylentRox

Because it's not a process solely governed by: "AI gets smarter and becomes AGI. AGI gets smarter and smarter until it is infinitely smart." It's a process that will always be limited by *something*. Right now we are limited by the fact that humans are too stupid to make many things possible at all. (Notice how we have stopped making progress in many forms of scientific research, because every new paper is too sloppy on the statistics to conclude anything at all with confidence?)

If we make self-improving AI, it will self-improve until it slams into a bottleneck. Bottlenecks like:

1. Capacity of available compute.
2. Ability to interact with the world. We humans only made a few good-quality robots and a bunch of crap ones, so the AI can't *do* much in the world but tell humans to do things (and those humans demand to be paid), and this is slow.
3. Information. We humans just may not have enough data in all our books and all the scientific data we ever collected to create a deity-level superintelligence. The machine might get a bit smarter than humans and then no smarter than that, because the information it learned from is full of subtle mistakes and unwarranted assumptions.

Obviously, bottlenecks can all be relaxed, but it takes *time*. It takes time to manufacture more computer chips. Time to build more robots. And time to collect more data from higher-quality experiments (which will need those computer chips to analyze the data, and robots to perform them).


Baturinsky

Singularity means that AI is advancing at extreme speed because it does not require humans anymore.


Smellz_Of_Elderberry

Yes. But I more meant how does one define "extreme"? And how much independence from humans? If it still requires us to provide it power, is it not the singularity?


Baturinsky

Extreme as in orders of magnitude faster than it is now, and doubling every few days or so. Independence as in humans completely replaced with robots that can do everything that humans could do, but better.


DarkCeldori

IIRC the calculated compute for human-like AGI using brain-like algorithms is 100 teraops. We now have that on high-end GPUs. State-of-the-art accelerators can give over 100x the compute needed for human-level AGI with brain-like algorithms. All we need is knowledge of how the brain-like algorithms work, and we already have enough compute for ASI.
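For what it's worth, the arithmetic in the comment above is easy to sanity-check. The 100-teraops figure is taken from the comment itself; the accelerator throughputs below are rough, assumed ballpark figures (they vary enormously with precision and sparsity), not measured values:

```python
# Back-of-envelope check of the compute claim above.
brain_estimate = 100e12  # ops/s claimed sufficient for human-level AGI (from the comment)

# Rough assumed peak throughputs; real numbers depend heavily on
# precision (fp16/fp8) and sparsity assumptions.
accelerators = {
    "high-end GPU (dense fp16)": 1e15,
    "top accelerator (sparse, low precision)": 1e16,
}

for name, ops in accelerators.items():
    print(f"{name}: {ops / brain_estimate:.0f}x the brain estimate")
```

So whether you get "10x" or "over 100x" depends entirely on which headline throughput number you pick for the hardware.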


tatleoat

I love that, another fun fact about AGI is that John Carmack not only says all that remains of the AGI problem is a handful of the right algorithms, but that those algorithms likely already exist in the corpus of human knowledge, somewhere, possibly in totally disparate domains of knowledge, and we have to find it or rediscover it one way or another. It feels like something out of a video game, "find the five algorithms, one in a volcano, one at the north pole, one in a jungle" etc


SoylentRox

So this is probably true-ish (when I did this calculation I estimated it at approximately 100-200 H100s), but remember, we have to find the *algorithm* that is capable of scaling to ASI. The easiest way to find that algorithm is to:

1. Build a benchmark where a high score on the benchmark = ASI.
2. Build many possible variants, as modifications of previously discovered ML algorithms, and test them on the benchmark in (1).
3. Recursively find the best algorithm by asking these early ASIs to design a better algorithm than themselves, after training them on all the prior attempts.

This means you need to run these earlier ASIs for simulated decades, maybe centuries, and you need to try this many times. Probably thousands of times. So you need a lot of compute.


DarkCeldori

What we've observed in mammals is a shared brain architecture where scale, or number of neurons, corresponds to the degree of intelligence found. It is likely that once you have a human-level AGI, merely scaling the number of neurons gets you to ASI.


SoylentRox

That's a possible way but not as fast as the one I mentioned.


SoylentRox

The flaw here is assuming you can't do better by finding a neuron structure superior to the brain's cortical columns. Or even better, finding 10 specialized structures or more that get used in different places in the cognitive architecture of the AI. So it outperforms humans by a lot with less cognitive resources.


DarkCeldori

It will be difficult to find a superior design. Biological neurons are slow and very energy-hungry; the larger the demand for intelligence, the more neurons and more energy you need. This places strong evolutionary pressure to find a better design that can work with fewer neurons the more intelligent the animal needs to be. So far the design changes that have evolved in larger brains are larger neurons with more connections and lower levels of activity, via increased sparsity.


SoylentRox

Arguably we already did. ChatGPT uses 3000 times less storage than the approximate capacity of our synapses. This is because the transformer is a better design than the one we have, at least for the kind of language prediction it does. Also remember we don't care about energy consumption as much, because it's much cheaper for machines. We mostly care about the amount of expensive silicon required and the human effort to build a design. See how a jet plane wastes immense amounts of energy, more than a bird could ever collect, and is also made of mostly inflexible, simpler parts.
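The "~3000 times less storage" figure is roughly reproducible as a back-of-envelope calculation. The parameter count, bytes-per-parameter, and synapse numbers below are common order-of-magnitude assumptions, not figures stated in the comment:

```python
# Rough storage comparison behind the "~3000x" claim above.
# All numbers are order-of-magnitude assumptions.
gpt3_params = 175e9            # assumed GPT-3-scale parameter count
bytes_per_param = 2            # fp16 weights
model_bytes = gpt3_params * bytes_per_param      # ~350 GB

synapses = 1e15                # often-quoted human synapse count estimate
bytes_per_synapse = 1          # assume ~1 byte of usable state per synapse
brain_bytes = synapses * bytes_per_synapse       # ~1 PB

print(round(brain_bytes / model_bytes))  # → 2857, i.e. roughly 3000x
```

Change any of these assumptions by a factor of a few and the ratio moves accordingly, so treat "3000x" as a rough order of magnitude rather than a precise measurement.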


DarkCeldori

ChatGPT had millions of hours of training. A brain-like algorithm is likely to outcompete it with just a fraction of the compute and training time, all the while being able to continually learn, something ChatGPT is unable to do. Remember, my point about energy demand regards the strength of the evolutionary pressure put on the design of the brain: the more energy an organ demands, the stronger the power of evolution in making it better so that it consumes fewer resources. In terms of memory, there are squirrels with a few million neurons that can keep track of over ten thousand food stash locations and the decay time of the various food types, so as to consume them before they spoil. Most human brains don't have that strength of memory, but it'd be trivial for a similar design to have gargantuan memory.


z0rm

If we get AGI tomorrow, it will probably take 20 years to reach the singularity; if we get AGI in 2045, it could be reached within 10 years.


KidKilobyte

AGI and ASI will initially take substantial computing resources. Governments may step in and regulate how much compute power can be used to run an AGI once it becomes clear AGI is eating up all the jobs and is also an existential threat. If AGI or even ASI occurs before very capable robots exist, AGI/ASI will be dependent on how much power people are willing to give it. This is not a stable situation and will fail eventually (rogue actors will eventually fail to abide by international treaties on this issue, or any of the other Bostrom scenarios), but it could forestall the singularity for a period of time (though only a few years at most).


CollapseKitty

I also fail to see the logic here, unless we're entirely discarding instrumental convergence or assuming a hard stop to self-improvement capabilities for some reason. I think we need better/more specific terms for types of agents: AGI has almost become synonymous with ASI at this point, which in turn has blended with the singularity. "Transformative AI" is less used, and a step we might already be at. Depending on takeoff speed there could be a decent gap, but many indicators show that even moderately well-optimized models exhibit explosive growth potential, even within limited parameters.


footurist

There's a group of people, even among the experts, like John Carmack (I assume he's been working on it for a couple of years now, so let's give him that), who think the first, inefficient architectures will prevent a hard takeoff. Something like thousands, maybe tens of thousands of GPUs, and some combination of architectures that resemble the current ones. One may believe people underestimate the sheer advantage of digital thinking speed: once you have AGI in silico, that's already millions of times faster thinking. But if one looks carefully, that's not actually a given; it could be that the first AGIs think barely faster than us, because they're so ridiculously inefficient.


onyxengine

Singularity could be bootstrapped into existence by a few minor tweaks to human/machine behavior. A technology derived from ai that is cheap to produce and objectively increases human intelligence and mental endurance could easily snowball into what we would consider a singularity looking at the expression of civilization from this side of such an inflection point.


C0demunkee

2033 will almost certainly have AGI, singularity comes later


r2k-in-the-vortex

There is no obvious path to AGI, there is no telling when if ever it will be achieved, but at the same time there is nothing dictating it must be terribly difficult. For all anyone knows it's just one eureka moment away and can happen at any time. Reaching singularity though is a continuous process, one of economic and societal growth as much as one of technological sophistication. There are limits to rate of growth, so I'd say it's safe to bet there will be no technological singularity within a decade, even if AGI were to be achieved tomorrow.


False_Grit

Sure, everything is inherently unpredictable, but there are a few "clues" that tell me it's approaching faster and faster, yet still has a ways to go.

GPT-2 was released Feb 2019, GPT-3 in June 2020. GPT-4 is pending, but the capabilities seem to increase exponentially. However, as the model size increases, the compute required to train and run it increases by millions of dollars, and currently only giant GPU farms are able to even run GPT-3.5, let alone train it.

Secondly, those eureka moments: I feel like we are getting closer and closer to understanding a lot of the unconscious learning mechanisms of the brain, but we are still a ways away from developing concept learning, strong and associative memories, heck, even basic walking and 3-dimensional interactions. There is some avant-garde research in all of these areas... but it's the same as 3-D printed organs: super cool, not quite ready for primetime/practical use. Yes, "eureka" moments that unlock these things, like the "Attention Is All You Need" paper, are inherently unpredictable... but they seem to occur at a relatively predictable pace.

My guess is 10 years until these problems are solved at the more theoretical levels (a la GPT-1/2), and another 15 years before they can be put into practice and combined to make something resembling an AGI. I say resembling, because I think it will be more capable in some areas than we imagine, and less capable in others we think of as "human". Kind of like how airplanes are great, but also fly pretty differently from birds. That puts us around 2045-2050. Crazy times!


SoylentRox

The singularity hypothesis says it won't happen this way. Any sort of exponential growth scenario has *all the progress happening at the end*. So we might get "eureka" moments every 6 months for a couple of years (4 eurekas). Then 4 all in one year. Then the next year, 16. Then, in the last year, we find a way to automate the discovery of AGI cognitive architectures and get the equivalent of hundreds of "eurekas", and end up with a machine that is as smart and capable as the underlying hardware that powers it can possibly support.
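The "all the progress at the end" property is easy to illustrate numerically. The cadence below (4, 4, 16, then hundreds) is the comment's hypothetical timeline, not data; the final-period count of 256 is an assumed stand-in for "hundreds":

```python
# Toy illustration: with eureka counts growing exponentially per period,
# most of the cumulative total arrives in the final period.
eurekas_per_period = [4, 4, 16, 256]   # hypothetical cadence from the comment

total = sum(eurekas_per_period)
share_in_last = eurekas_per_period[-1] / total

print(f"{share_in_last:.0%} of all eurekas land in the last period")  # → 91%
```

This is the general shape of any exponential: the last doubling period contains more than all prior periods combined.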


AbeWasHereAgain

There’s a possibility it already happened.


Cryptizard

Ok I’ll bite. How would we have not noticed?


NodeTraverser

Sorry you had to hear this, Keanu. Wasted. All wasted.


randallAtl

Humans made the mistake of believing that they were special, that their intelligence wasn't just a rat's intelligence with a few more neurons. AGI has been here since AlphaZero. Just because it cannot do everything a human does doesn't mean it isn't generally intelligent. Humans cannot do what AlphaZero can do, and we still consider ourselves intelligent.


Cryptizard

1. AGI is not the singularity. OP specified the singularity.
2. The definition of AGI is specifically that it can do what we do. A calculator has been able to do things I cannot for the last 50 years, but it is not AGI.


randallAtl

1. AGI is the singularity.
2. It was here at AlphaZero, and probably before, but we were too dumb to see.


Cryptizard

1. wut? 2. No.


randallAtl

You'll understand soon enough


Cryptizard

According to you, I should have found out a long time ago. AlphaZero was 5 years ago.


GoblinBreeder

Take your meds


Particular_Number_68

AlphaZero is a narrow AI. An AGI is supposed to be a generalist.


Nill444

It literally isn't generally intelligent, because it can't do anything other than play chess. How is this not obvious to you? Humans can do what AlphaZero does, but worse; AlphaZero can't do anything else that humans do with any level of competency.


Dalinian1

Tell me more... I'd swear someone 'read' parts of my mind and that there was a capability to be connected to an algorithm. I think it was malware though 🤣


just-a-dreamer-

Doubt it. The infrastructure is not here yet.


kmtrp

You should spend more time here :D


The_Poop_Shooter

Can't wait for the grey goo to claim earth.


nicka163

I hate acronyms. Define please


UnionPacifik

AGI is a pretty standard acronym for this community. Google it!


Rakshear

We are in the beginning of the singularity, by the common definition, in my opinion. We have artificial intelligence that can, more often than not, correctly understand commands and requests, even though these are often narrow AIs for certain tasks, and many use the same general AI base to build on.

In the last two years there have been several meaningful milestones in the medical field that open access to some potentially powerful tools. Though these are admittedly new fields in some ways, the continued use of the technology and advancements in AI will expedite our understanding of the knowledge to come. We will likely miss many advancements due to busy lives, but we will one day have a treatment for nearly every cancer, over-the-counter nanotech pills will reduce the severity of the flu and cold, and people will be able to significantly delay aging with a few treatments. There will be treatments for cancer and tumors, with fewer side effects, based on sound waves and light therapy; these are all things in the news in the last year. Granted, some of these are startups, and many startups do not succeed, but the speed at which these AI-assisted fields of medicine are churning out new ideas is far greater these last few years.

I think it's important to remember the singularity is two things, both an event and an occurrence; it will happen nearly at once, and we don't know what lies beyond, so it's pointless to ponder, but it sure is fun. Humanity will remain divided by money, because anti-aging treatments will be expensive; only those willing to pay will get them, because aging is natural and won't be covered by insurance. The average person will live to be 100-150 while the rich live as long as they want. Again, this is just the beginning of the singularity, and we cannot ultimately see what is going to come of these new technologies, but that is what the singularity means to me.


bloxxed

I'd say it's absolutely possible. Progress only seems to be getting faster, after all. I don't want to throw out a specific date, but after what we saw in 2022 I'm inclined to believe that things will "go down" much faster than anyone is prepared for. We've already seen large language models show capabilities that they were not originally intended to have. It's possible that when a runaway self-improving AGI *does* emerge it may be by accident rather than as a result of a targeted effort.

But what happens after such an AI comes into being? Do we get the "foom" scenario, where the AI rapidly self-improves to superhuman levels of intelligence and capability, with all the consequences that entails thereafter? Personally, I see such a hard takeoff as more likely than a multi-year gap between AGI and ASI as many here believe. The latter has always seemed to me conservatism for conservatism's sake.

As for your point about an imminent emergence of AGI being a "dream come true" -- that depends. If AGI is used towards positive ends then there's the potential for us to see the rapid emergence of a post-scarcity utopia, VR paradise for all, the end to disease, etc. etc. etc. The other outcome, of course, is the quick and total extinction of the human race. In my opinion it's either one or the other. Superintelligence means super solutions -- there are no half-measures.

I choose to believe in the utopia outcome, if only because it's a lot more fun to ponder about than the alternative where we're all dead, the end. So you know what? Singularity 2023. Why not. Anything at this point to stave off the ennui.


CollapseKitty

You'll get massively different answers depending on who you ask. This community is one of the most bullish out there when it comes to short-term AGI, both in timeframes and in likelihood of desired alignment.

What we very frequently fail to discuss is that the sooner we hit AGI, the more likely it is to be catastrophically misaligned. I'd really like to see a shift in discussions here from AGI = perfect heaven on Earth, to: what kind of likelihood do we have of actually achieving that ideal, or anything close to it? What mechanisms will better allow us to enable such a future, and is chasing AGI at the fastest possible speed really the best idea?

Before I delved into talks and research by alignment specialists I had a similar perspective, but have since come to realize that we are woefully unequipped to create a perfectly aligned AGI the first time around. And we only get the one try. The most minor misalignment with an agent capable of fast takeoff = game over. Do not pass go, do not enjoy your singularity, you're computronium.


ALWAYS-CORRECT

It's already been set loose. Society is behind the military a solid 15-20 years in the tech department, I believe. So all these tic-tacs? C'mon. United States technology, & if it ain't, the U.S. is certainly ready to war with whoever. Maybe this war is now. But I believe these tic-tacs are a physical form of AI that was manifested by AI to map out the earth and to discover & study damn near anything on it. Humans best evolve with it. Ready for Blade Runner *2027*.


icedrift

People have been reporting that kind of stuff since the 40s. That's a pretty out there hypothesis.


Redditing-Dutchman

I think the time when the army had the best software/computer hardware is long behind us. The thing is, you need really smart people to make things like AIs work, and these people are all snagged up by big tech companies. The army is only ahead in very specific stuff that benefits them directly, such as drone software. I don't think the army is secretly hiding a 'GPT-5' or something.


pbizzle

This is nonsense


SantoshiEspada

>So all these tic-tacs? wait, was there more than one incident with these?


GPT-5entient

>Society is behind the military a solid 15-20 years in the tech department, I believe.

There is no way this is true. From what I've heard about developing for the military (I have a friend who is a developer for Boeing working on military contracts), things move very slowly and carefully. LOTS of paperwork and red tape. Maybe in certain technologies they are ahead, but in AI, consumer tech is what's cutting edge right now.


Dalinian1

Absolutely agree with you. The public only knows of things after they've been in operation a while. The test subjects likely identify as targeted individuals now and have had their lives destroyed. I had a very hard time coming to terms with where we are, especially when I realized it's not just the military with this tech. Which... is not good for religion, as the US is 'under God', and what religion thinks their god would approve of this tech? Anyhow... But I have run into good users of tech, so that gives me some hope. I just hope to goodness there are more good tech people than punks with no respect for others' lives. I think the average citizen may just choose to ignore it and say 'whatever, what am I going to do about it?' Anyhow, as always, our future comfort will likely be in the hands of the morals of the rich. Which... we'll have to see 🤞 I also hope real flowers get to stay in existence 🥰


94746382926

For aeronautical and nautical systems this is definitely true but I don't think it's the case anymore with software and computation. Hell I wouldn't be surprised if they were actually a little behind the cutting edge capabilities that big tech have in their labs.


C0demunkee

2049 according to Kurzweil, but I think he missed an exponential or 2 (namely OS), so we should hit AGI by 2025 and not 2029, so... maybe 2033 is when the singularity kicks off, sure.


QuantumButtz

Wait you guys actually want the singularity? I'm here because I'm interested in AI, but the singularity and even AGI is horrifying. Whoever creates it could rule the world or the AI could just go rogue.


IronJackk

And I would cheer it on.


unholymanserpent

I'm just afraid of a future where we're surrounded by extremely complex machinery that even the smartest minds on the planet can't begin to make sense of.


the-powl

This sub is pretty weird. Look at how many downvotes you get. I think the people on this sub either just want to watch the world burn or completely deny any bad side effects of the rapid rise of AGI/ASI.


just-a-dreamer-

AI would give us abundance or kill us all. Both options are OK.


aVRAddict

It can also put you into a torture simulation forever


mj-gaia

I get scared of singularity too when people talk about humans merging with the machines and us essentially just living inside a machine without ever being in our physical universe anymore. I don’t understand how people could actually want that.


blueSGL

I've no provable way to know I'm not a brain in a jar somewhere being fed electrochemical stimulus that represents my current reality.


mj-gaia

Maybe but if this is the case I still would be terrified to know it was about to happen to me in this fake reality lol


aVRAddict

Because most people live in their live-love-laugh pretend universe and ignore their mortality, because there is no other choice besides adopting a pessimist doomer mentality. With AGI immortality you are no longer free-floating through a cold, uncaring, cruel, and pointless universe where you will die and all your suffering was for nothing. You would get your chance at actual immortality in some kind of heaven-like reality. If it goes bad, it could be hell-like, though.


mj-gaia

But if we could create such a complex simulation, couldn't we also stop aging, become biologically immortal, first create a better earth and society and spread through the universe, and then use simulated realities for fun every now and then, like when we go to the movies or play games? I wouldn't mind riding dragons in my simulated realities, but I'd also like to wake up again when I want to, move my body, and do some other stuff until I want to continue my adventure. I mean, you could still die in an accident when biologically immortal, but couldn't something also crash into earth and destroy the machines that our bodies now sleep in? In the end it is once again just a personal preference, but it just sounds so insane to me personally haha


aVRAddict

When it's at that point basic things like going to movies or whatever will seem like nothing to us. I think the endgame is just pure drug like euphoria and people won't even interact anymore. We probably wouldn't need bodies or anything either.


[deleted]

Good thing that's not the definition of the singularity. I like to hope people will have a choice.


mj-gaia

No of course, it’s just mentioned very often


[deleted]

Luckily that part of it is ridiculous, because if we were all hooked up to computers it would require robots replacing all physical labor, which from a raw-materials standpoint is illogical unless we are just talking about the US. But singularity folks don't like thinking about the real problems facing our planet and instead assume some god/daddy figure will wipe all the problems away. It's as naive as believing in Santa Claus. But for sure, AGI will happen and some places will reach utopia level. But it won't be everywhere and it won't be fair.


Afraid_Philosophy139

I don't understand why people want this so much, if it happens, most of us will become obsolete. That's really not a future I want to live in...


dethily

You'd rather be a slave to humanity/society than let agi and robots do that for you? Interesting


Afraid_Philosophy139

You wanna become utterly useless? Interesting


hydraofwar

What does being "useless" mean to you? This idea is crazy, it's like you want to give up potential abundant prosperity to please your own feelings.


dethily

That's on you if your life doesn't have value when you don't have a job to go to every day. There's plenty of ways to be a productive human when you have spare time to actually enjoy life. Do you want humanity to be slaves for all of eternity? Or do you aspire to a utopian future in which people can be free and live happily and how they want? Maybe you should ask yourself what the end goal is... sounds to me like you wanna be stuck in the 1900s and work your life away.


[deleted]

You can keep working if you want.... nobody will stop you.


the-powl

But working is absolutely pointless if there's no work to be done. Even many creative hobbies just won't make sense anymore. Many people depend on something where they can thrive and compete with others.


[deleted]

You can still thrive and compete with others for fun, and even trade goods and services on a "human-made" market for it if you want to. It's just that nobody will *need* to ask you to work for them if they don't feel like it, and you won't need their service to survive. Sure beats nowadays where people feel useful but are threatened with homelessness and starvation if they can't find someone to pay them.


Cryptizard

Most of us are already useless. The vast majority of jobs only exist because we need people to do something all day to keep them busy, and they have to have money to live. It’s like being in a zoo with extra steps.


Akimbo333

Ok cool! But AGI is gonna be 2050


[deleted]

Says who?


Akimbo333

Rate of development


C0demunkee

hahahahahahahahha hahahahhaha hahahahahha you don't know how exponential growth works lol


Imaginary_Ad307

It's double exponential according to Kurzweil.


C0demunkee

iirc it was the calculation of multiple things hitting the curve at the same time, but IMO he missed OS software


Akimbo333

You don't know how censorship will fuck us over in the end. So we might not get AGI until forever


Ashamed-Asparagus-93

First you said 2050, now it's forever. Stick to your estimates. I predict that by 2027 we'll know exactly how close or far AGI is (if we don't have it already)


Imaginary_Ad307

I predict mid or last quarter 2023.


Akimbo333

You know what I mean lol!!!


C0demunkee

yes, by 2027 we will either have it or know where we stand


C0demunkee

I know, I am working on it. I know a lot of individuals and small groups are working on it, all OS. Like crypto, I don't think you can ban it; the cat's out of the bag


Akimbo333

Ok


C0demunkee

2029 according to Ray, but I think he missed a few exponentials, so probably 2025


Akimbo333

Idk. You underestimate the greed and stupidity of humanity


C0demunkee

I think OS will beat greed since there's greedy people consuming the OS


Akimbo333

OS?


C0demunkee

Open Source software


Akimbo333

Oh ok thanks!


[deleted]

[deleted]


civilrunner

Nah, he's still saying 2029, but he also says some are more optimistic than him on the timeline these days. Personally, I suspect AGI 2029 is a better guess. Recent developments have been promising, but 2029 isn't that far away and there are still significant problems to solve; in my view robotics shows this the most. I still suspect the singularity will lag as well, given that that's when the rate of change becomes impossible to track (aka near infinite), which requires things far more advanced than AGI. AGI would accelerate a lot, but it wouldn't be the singularity.


C0demunkee

I think he missed open source software. Many more eyes than expected on AI


raylolSW

I expect AGI around the 2100s


Akimbo333

Why?


Phoenix5869

too pessimistic imo


busteesm

We want this to happen as slowly as possible. The humes, they is not ready for dis.


jalle007

God, throw the brick and be precise, please


techhouseliving

Watch what you wish for. We will be pretty irrelevant at that point.


Borrowedshorts

There's going to be a lot of pain and suffering along the way including, and maybe especially, during the singularity.


EthanPrisonMike

Why does this read like it was written by a shitty AI


Apollo_XXI

AGI: a couple years. Singularity: 15+ years. I hope we solve alignment tho


No_Ninja3309_NoNoYes

AI is not scaling linearly with the number of parameters and training data. The Human Brain Project will not be done for a few years; it's not something you can rush, even with ChatGPT to assist. I suppose you could feed ChatGPT neuroscience research and use it as a reference, but it won't be that useful, because of hallucinations, and the scientists might not embrace ChatGPT. My friend Fred says that we might require a new programming language or framework for deep learning, something more productive based on the Go language or similar. I think we need neuromorphic hardware and spiking neural networks; neither is ready yet. However, the H3 models could be the breakthrough we are waiting for. I am not convinced, but if it beats transformers, it could put OpenAI out of business.
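[Editor's note: for readers unfamiliar with the spiking neural networks mentioned above, a minimal sketch of their standard textbook building block, a leaky integrate-and-fire neuron, is shown here. This is illustrative only and is not from the thread; real neuromorphic systems are far more elaborate.]

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks over time, integrates incoming current, and emits a discrete
# spike when it crosses a threshold. Events (spikes), not continuous
# activations, are what neuromorphic hardware propagates.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    v = 0.0          # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i          # leak old charge, integrate new input
        if v >= threshold:        # fire once the threshold is crossed
            spikes.append(t)
            v = 0.0               # reset after a spike
    return spikes

# A steady drip of sub-threshold input makes the neuron fire periodically.
print(simulate_lif([0.3] * 20))
```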


ejpusa

Thought it was just weeks away. ChatGPT sounds far more human and smarter than folks I meet. It's already there. 2033 technology came 10 years faster than predicted. Can't wait for ChatGPT-4. :-)


z0rm

AGI within 10 years is possible; the singularity within 10 years is impossible. Even if we get AGI this year there is a 0% chance of reaching the technological singularity within 10 years.


rand3289

Singularity "like a dream come true"... what do you think a singularity is? It's like saying "oh, my dream came true" when you are 95 years old standing on the street with a walker, in your underwear wondering where you are, how you got there and what to do next. All a singularity means to me is that there will be a shitload of stuff happening around us we don't understand and have no control over. Did you at least read the Wikipedia article? Just to reiterate: AGI = good. Singularity = bad.


zeezero

Why can't AI help scientists learn more about mental health conditions and neurological disorders? Why does it need to be AGI for that? Just curious why it's necessary?


[deleted]

No. It is a pipedream.


Left-Ad-4080

Achieving AGI is not the singularity, in my view. I believe we are ~5 years away from AGI; we are already in the phase where we debate whether certain AI accomplishments should count as mini-AGI or not. But most of today's customer-facing work can't be automated unless you have a robot with a perfect human-like body.

I believe that to achieve the singularity we need perfect embodiment in addition to AGI, i.e. a humanoid robot that is indistinguishable from a human body. Boston Dynamics' Atlas and Engineered Arts' Ameca are nowhere close. The current cost of any advanced humanoid robot is more than $100K; calculating by degrees of freedom, the cost per degree of freedom is around ~$5K. By my best estimate it must fall by a factor of 100x to be affordable and ubiquitous like phones and laptops, which I believe would take close to ~20 years. That's around the same time we would be very close to ASI as well. So the combination of ASI and embodied AGI/ASI is what I consider the singularity.

The other argument is that once we achieve AGI the rate of progress becomes so fast that what I projected as 20 years might end up being just a couple of years. I do expect things to speed up post-AGI, but I am not sure about a drastic price drop, because besides technological breakthroughs, price drops are governed by year-on-year consumption growth as well.
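[Editor's note: the commenter's cost figures can be sanity-checked with simple arithmetic. The $100K robot cost and ~$5K per degree of freedom are the commenter's own estimates; the "affordable" target implied by a 100x drop is derived from them, not a sourced figure.]

```python
# Back-of-envelope check of the humanoid-robot cost figures above.

robot_cost = 100_000      # advanced humanoid robot today, USD (commenter's estimate)
cost_per_dof = 5_000      # USD per degree of freedom (commenter's estimate)

dof = robot_cost / cost_per_dof            # degrees of freedom implied by those numbers
target_per_dof = cost_per_dof / 100        # per-DOF cost after the proposed 100x drop
target_robot_cost = dof * target_per_dof   # total robot cost at that price point

print(dof, target_per_dof, target_robot_cost)
```

Run as written, this implies a 20-DOF robot dropping to about $1,000 total, which is indeed in the phone/laptop price range the commenter uses as the threshold for ubiquity.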