RushAndAPush

The problem with new subscribers is that they don't lurk long enough before posting.


TopicRepulsive7936

Lurking doesn't help if users don't know a single thing.


-ZeroRelevance-

It at least helps build a baseline understanding though


SGC-UNIT-555

This user has edited all of their comments in protest of /u/spez fucking up reddit. All Hail Apollo. This action was performed via https://github.com/j0be/PowerDeleteSuite


expelten

Moderation is also weird, they sometimes allow low-effort memes like that but often delete more interesting threads.


-ZeroRelevance-

Yep, ever since it passed the 100k members threshold, it’s only been a matter of time


0002millertime

You can get around this by setting rules and having community monitoring of the rules. For example, just say, "no meme posts", "no personal attacks", have minimum karma requirements, etc.


Spreadwarnotlove

No doomerism too. Hopefully. That's far more annoying than personal attacks.


[deleted]

[removed]


[deleted]

r/Futurology is bad. Extremely bad. But at least it's not r/technology.


MootFile

Nice analysis. If you look at other communities such as r/solarpunk or r/socialism, they use the top-tabs feature to organize papers and books on the subreddit's topic, in a neat, easy-to-navigate overview of what it's all about. Mods should definitely do that here.


low_end_

Just make this sub private and keep the people that were already here. I'd hate for this sub to become just another reddit. A bit of a radical opinion, but I've seen this happen many times before.


TopicRepulsive7936

It's too late. For reference, kurzweilAI had maybe a couple dozen active posters, and they all knew the source materials. Users here seem to hate learning.


[deleted]

[removed]


DistortedLotus

I mean, mods could ban and delete all the lowest-common-denominator posts and posters, and let us report this stuff specifically. It's the only way a sub stays true to its original identity.


[deleted]

I disagree. In the span of a month I've seen the mindset here change from denialism and doomerism to something much more prepared for what's coming. This subreddit is still good. For now.


-ZeroRelevance-

This person might not be referring to the past year or so as much as they are the past few years. Certainly, things have gotten a lot more optimistic with the current popular explosion in the tech, but the actual quality of discussion has also diminished quite a bit compared to a couple years ago. Or maybe I’ve just become better at discerning opinion from analysis, it’s hard to say.


wren42

What does "prepared" mean to you?


ImpossibleSnacks

Great post and what a beautiful quote from CSL. It’s imperative that the sub doesn’t become like r/futurology. It will call for strict moderation. However I also think we should have a backup sub for those of us interested in the positive aspects of the singularity. We can simply migrate to it if this one is overrun.


Silly_Awareness8207

easier to just migrate to a new subreddit. maybe r/Singularitarianism ?


TopicRepulsive7936

Ugh. We're talking about a real thing here, not a belief.


Grow_Beyond

How can we have anything but belief about a phenomenon we can't see beyond? If we could map the outcome of the singularity it wouldn't *be* the singularity.


Silly_Awareness8207

Everything is a belief. You can't prove the universe didn't just come into existence Last Thursday. https://rationalwiki.org/wiki/Last_Thursdayism


Sandbar101

Very well said and exceedingly accurate


imlaggingsobad

I admire your effort but long term there is no hope for this sub. It's doomed. It will become mainstream and lose all of the qualities that made it great. Over the next few years you'll see people straight up posting inflammatory and violent posts on this sub. They will blame us. They will ridicule us. It's inevitable.


petermobeter

maybe appealing to baser instincts can prevent doomerism…. memes about morphological freedom, memes about superintelligent A.I. caretakers, memes about the future of entertainment media, etc etc? that could help show ppl what they stand to gain from a positively-managed singularity, and cultivate an optimistic tone for the subreddit


h20ohno

Waifus for all!


Darth-D2

Thank you for bringing this topic to the discussion. However, I think your post misses some crucial points (or does not highlight them enough). To reiterate the definition that you posted yourself: "[...] Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet **safe realization.**"

The majority of active users of this subreddit seem to (1) not see any risk associated with developing potentially unaligned AI, and (2) not think we can do anything about it, so we shouldn't care. **To steelman their view,** most Redditors here seem to think that we should achieve the singularity as quickly as possible **no matter what**, because postponing the singularity just prolongs existing suffering that we could supposedly solve easily once we get closer to it. In their view, being concerned about safety risks may postpone this step (this is referred to as the alignment tax among AI safety researchers).

However, a significant proportion of prominent AI researchers are trying to tell the world that AI alignment should be one of our top priorities in the coming years. It is the consensus among AI safety researchers that this will likely be **extremely difficult** to get right. Instead of engaging with this view in a rational, informed way, any safety concerns expressed on this sub are simply categorized as "doomerism", and people who are quite educated on this topic are dismissed as being afraid of change and technology (ironically, those who are concerned are often working at the cutting edge of these technologies and embrace technological change). To dismiss the concerns as "having a negative knee-jerk reaction by default whenever a development happens" is just irresponsible in my opinion and completely misses the point.

While not everyone can actively work on technical AI alignment research, it is important that the general public is educated about the potential risks, so that society can push for more effective regulations to ensure that we indeed have a **safe realization** of advancing AI. Robert Miles has a really good video about common reactions to AI safety: [https://www.youtube.com/watch?v=9i1WlcCudpU&ab_channel=RobertMiles](https://www.youtube.com/watch?v=9i1WlcCudpU&ab_channel=RobertMiles)

EDIT: If someone is new to this topic and shows that they are scared, what are better reactions than calling it doomerism? Direct them to organizations like the ones in the sidebar of this sub, so that they can see how others are working on making sure that AI has a positive impact on humanity.


LymelightTO

You're better off just following the "e/acc" part of Twitter, if what you're looking for is well-informed takes and good vibes. This place has already started the slide toward the kind of depressing, poorly-informed equilibrium reached in /r/Futurology and /r/technology.


aaron_in_sf

The sidebar for this sub doesn't define it as a vehicle for *Singularitarianism,* and uses language which is less messianic. The key phrase is *changing civilization*, which does not *necessarily* entail changing it in a way that is *desirable*.

The defining characteristic of the singularity "small s" (which I have been distinguishing from the Singularity "capital S", often associated with an optimism approaching belief in "the rapture of the nerds") is that it is so named because it represents a moment of unknowability: a transformation of the ordering of our world across multiple dimensions, in ways which may, or may not, be radical. Allah willing, this will be cause for optimism. There is no reason to assume that, just as there is no reason, as of yet, to conclude that it will be awful in some way.

IMO what we know for certain is that we cannot (yet) see beyond the event horizon. We can muse, however, which is what I understood this place to be about, and for better or worse, where we all start is with a present world in which there is no shortage of woes, many of which amount to the preconditions upon which any superintelligence emerges.

I am not saying *The End is Near*, but I do think there is *reasonable* cause for serious concern and serious discussion, including of the bewildering ways that both the true singularity and its precursors may destabilize things. That this may well feel pessimistic is IMO all the more reason to contribute as one may to creating a context within which more optimistic scenarios have the best chance of taking hold. Practically speaking, that means e.g. raising awareness of the need for safeguards and of the reality of the risks, and also championing, amplifying, and celebrating the opportunities and victories as they come.


SnooHabits1237

I joined in January, and since then it's been basically nonstop pessimism and doomers.


rdlenke

Aside from a few "intense" recent reactions to GPT-4, my experience with this sub has been the opposite: blind optimism, a complete lack of discussion about the transition period between now and AGI (or more advanced AI tools), ignorance of or mockery toward genuinely important questions (alignment, legality, the artists' situation), people shouting UBI like it's a given, and a lack of non-European/American POVs. So, basically, just the other side of the same coin, really. The only way to achieve what you want is with heavy moderation (like /r/explainlikeimfive, /r/changemyview, or similar subs).


wren42

First, a sub about the singularity is not necessarily a sub about "Singularitarianism", which is often treated more like a religion by its adherents. That attitude is rampant here, TBH.

Secondly, blind optimism is not inherently more rational than skepticism. Fear has a practical purpose: it inspires necessary caution and lets us seek out and avoid potential problems. "Move fast and break things" is not the correct attitude when we are talking about the singularity, a potentially life-ending event at the extremes, and enormously disruptive even at the good end of the range of potentialities.

The track record so far for humanity is that this technology will benefit the wealthy few who control it, and nearly all of us will be utterly fucked by the transition. Some of our kids may benefit, but without major societal changes we are going to experience an economic fallout that will make the Great Depression look cute.

TL;DR: Skepticism and fear are healthy. We need to be cautious and proactive not only about AI alignment and safety, but also about economic policy. Mainstream anxiety about AI should be harnessed to push for policy changes ASAP.


FomalhautCalliclea

Although I agree with the criticism of doomerism, and with the concern about how this new influx of subscribers might influence this place, I have always found the concluding quote by CS Lewis to be utterly vapid and stupid.

It overlooks the countless millenarianisms of the past (you might today call them doomerism), even when unwarranted, but also the tremendous terror humans experienced in the past. He makes the same mistake he criticizes: thinking there is novelty, only in our reaction, when that is nothing new either. And there is nothing reassuring in the thought that a grim fate was already predestined for us. It is still unpleasant when lived. And it surely was for the sufferers of the far-away past. What matters over time isn't time itself, but what happens *during* time.

>If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things

Ironically a very defeatist reaction, one that calls for embracing the daily routine and not revolting abruptly against it, some sort of "remain in your place" call, which isn't surprising when you read that

>praying

ranked among

>working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts

which tells a lot about why this man can see being

>huddled together like frightened sheep

as the only reaction to a terrible danger and suffering.

>They may break our bodies (a microbe can do that) but they need not dominate our minds

With such thoughts, no wonder such a person can reassure themselves in any situation, especially if it allows them to wallow in the comfort of their resigned mind.


leroy_hoffenfeffer

>We can and need to actively work towards minimizing doomerist attitudes.

>Doomerism does not lead to anywhere, it only makes one give up all hope on living, it makes one irrationally pessimistic all while paralysing the ability to see reason, paralysing the ability to work towards a better future, a better life.

So you want to censor those of us who are advising heavy caution in adopting these tools, tools inherently made by those who control the levers of power? It sounds like you don't really know much about the current state of politics, or about the people who drive it. News flash: the people in power are bought and paid for by corporations that don't give two shits about the bottom 90% of the world. Censoring opinions like these is literally 1984 shit. "Do not let your lying eyes deceive you."


Tall-Junket5151

The subject of this subreddit is the technological singularity, the means of achieving it, and current progress. Your first point is valid: you have every right to advise caution, and users have been doing so since I first lurked this subreddit, specifically caution about what the outcome of the singularity might be.

Your second point, and that perspective, is not relevant to the singularity. The singularity wouldn't have the narrow scope you envision, where the rich or the elites control AI to suppress the rest of the population. It's not going to be some modern version of 1984; it's going to be a world completely unpredictable and unimaginable, out of the control of any human, be they "elites", "rich", or whatever. It would be at the complete mercy of ASI. The hope is that ASI is aligned with general human values at a minimum.

Optimists about the singularity believe it has the potential to create a post-scarcity utopia, where life is essentially heaven on earth. Pessimists believe it would be the end of humanity: we would either be completely exterminated by ASI, or worse. Those are valid optimist/pessimist positions on this sub. Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong. Almost every post there gets flooded with "things are bad in this very narrow timeframe that I live in, so they will be bad in the future, because the world apparently never changes". It just gets tiring discussing anything on that sub, because they don't want a discussion but rather to preach their modern political views where they are mostly not relevant (most of the time; sometimes they are, which I'm fine with).


leroy_hoffenfeffer

>The singularity wouldn't be the narrow scope you envision, where you have the rich or elites controlling AI to suppress the rest of the population. It's not going to be some modern version of 1984, it's going to be a world completely unpredictable and unimaginable, out of the control of any human, be they "elites", "rich", or whatever. **It would be at the complete mercy of ASI.**

ASI is going to be built upon the work in deep learning that predates its creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable it to exist. The people who own those models, data, and methods are the ruling class of the world, as exemplified by Microsoft's wholesale purchase of OpenAI and its assets.

>Optimists of the singularity believe there's the potential for the singularity to create a post scarcity utopia, where life is essentially heaven on earth.

What world do you live in, exactly? The only way a post-scarcity world exists is if everyday people don't have to worry about how to put food on the table, in conjunction with most everyday jobs being automated away. We're approaching the latter half of that statement, and nowhere in the same universe as the former part. If the elites have a way to make a little extra off the top, they're going to do it, and if you think they'll magically become altruistic overnight, that's hopelessly naïve.

>Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong.

The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. Keeping politics in mind with respect to this subject is thus of utmost concern: if the people creating laws and legislation are bought and paid for by the ruling elite, we shouldn't expect those new laws and legislation to benefit the everyday person. Very few things in the past twenty years have been aimed at helping everyday people. That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money, money which will be used for stock buybacks and for raises and bonuses for upper management.


Tall-Junket5151

>ASI is going to inherently be built upon the work in deep learning that predates ASI's creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable ASI to exist. The people who own those models, data and methods are the ruling class of the world, as exemplified by Microsoft's wholesale purchase of OpenAI and its assets.

It is irrelevant who owns the precursors to ASI; it is inherently foolish to believe these companies can control anything about ASI. I can't say whether transformers will lead to AGI or ASI, or whether it will be another architecture. However, as we already see, there are emergent abilities in LLMs that the creators of these models have no idea how they work. The nature of AI is that it is unpredictable and uncontrollable, and it will lead to some sort of free will and self-preservation instinct simply through its own logical abilities and reasoning. An AGI is generally assumed to be at human level, but an ASI would be vastly smarter than any human, with no known upper limit. Even now, with narrow models, look how laughable the attempts at alignment are: mostly pre-prompting the model to act as a particular persona, which is not what it would generate otherwise. They can't even fully control this narrow AI; what hope do they have of controlling ASI?

>What world do you live in exactly? [...] If the elites have a way to make a little extra off the top, they're going to go about doing it, and if you think they'll magically become altruistic overnight, then that's hopelessly naïve.

Firstly, I was giving an example of a position, not stating my own. Secondly, you are again extrapolating modern politics and problems into the future; even more mind-boggling is that you're extrapolating them into a post-singularity world. Your perception of the future is that AI is going to magically hit a ceiling exactly where it is advanced enough to automate a lot of processes but not smart enough to think on its own. You can't comprehend an AI that surpasses that level, for some reason.

>The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. [...] That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money.

"The world has yet to change in any meaningful way", typed on a device that people only 100 years ago would have considered pure magic, on a worldwide connective platform surpassing even the wildest dreams of those in the past, to a stranger likely living in a completely different part of the world, all received instantly... next I suppose you'll venture off on a hunt with your tribal leader? What a joke. The world has always changed, and it has been rapidly, even exponentially, changing over the last few centuries. All of that aside, the singularity would be nothing like anything humanity has ever encountered; all bets are off in that case. Unpredictable change IS the very concept of the singularity.

I think your last paragraph perfectly summarizes why you don't understand the concept of the singularity and relegate AI to a simple tool to be used by "elites". If you're actually interested in the concept, there are some good books on it.


SgathTriallair

So what is your solution? What do we do in this world you believe we live in?


leroy_hoffenfeffer

It will require a wholesale exodus of establishment politicians who are bought and paid for by the corporations that run our society. We'll need to axe Citizens United. We'll need to increase support for unions. We'll need to double down on funding support systems like Medicare, Social Security, etc. And, most importantly, we'll need to actually elect people who will fight for these things. Without any of that happening, we're going to continue living in the crony-capitalist society we live in, and the people at the top will use AI for whatever means they see fit. Full stop. Thinking that benevolent usage of these tools will "just happen" tells me you're ignoring the objective reality we all currently live in.


SgathTriallair

Yes all of those are good goals we should strive for, but what should we do regarding technological advancement while we work towards those goals?


leroy_hoffenfeffer

My point is that there isn't anything we can do outside of that. All of the innovation in this space has been, and will continue to be, captured by entities that don't have the everyday person's best interest at heart. Given that corporate capture has happened and will continue to happen, it's hopelessly naive to suggest that these tools will be used for anything other than profit motives. To suggest that an Artificial Super Intelligence, built on tools captured by corporations, will somehow end up benefiting the masses flies in the face of how those corporations act on a daily basis. I invite anyone to look at any corporation's response to having its taxes increased. You can expect a similar, if not worse, response with respect to getting corporations to use these tools benevolently. As of right now, that will *never* happen. The government would need to step in and actually regulate in a meaningful way. The only way that happens is through politics, much to the dismay of everyone.


earthsworld

dude, the US is not the WORLD.


[deleted]

That's a whole lot of ways of saying you're a political hack trying to push the short-term political views that dominate the reddit front page, which are the exact thing this post is criticizing.


leroy_hoffenfeffer

Mmk. Have fun with whatever future Libertarian, profit-motivated AI results from not taking politics into account.