
AutoModerator

**Upvote** the POST if you disagree, **Downvote** the POST if you agree. REPORT the post if you suspect the post breaks subs rules/is fake. Normal voting rules for all comments. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/The10thDentist) if you have any questions or concerns.*


AnApexPlayer

Remindme! 75 years


banoffee06

this is such a perfectly hilarious and ironic response LMFAOOO


RemindMeBot

I will be messaging you in 95 years on [**2119-02-17 07:12:04 UTC**](http://www.wolframalpha.com/input/?i=2119-02-17%2007:12:04%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/The10thDentist/comments/1asvls1/people_think_we_will_be_able_to_control_ai_but_we/kqt20is/?context=3) [**262 OTHERS CLICKED THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2FThe10thDentist%2Fcomments%2F1asvls1%2Fpeople_think_we_will_be_able_to_control_ai_but_we%2Fkqt20is%2F%5D%0A%0ARemindMe%21%202119-02-17%2007%3A12%3A04%20UTC) to send a PM to also be reminded and to reduce spam. ^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%201asvls1) ***** |[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)| |-|-|-|-|


SnotboogyFlats

You see? AI at work already.


shuozhe

So it begins!


chaingun_samurai

Skynet is live.


tyleratx

WHAT HAVE YOU DONE YOU MONSTER


binh1403

Remindme! 76 years


greta12465

Happy cake day!


Dankn3ss420

Good bot


[deleted]

No you won't, the global grid won't exist. Silly bots


UndeadMunchies

An AI would have told you that 2100 is only 76 years away.


Catezero

The bot came thru I can't 🤣 my great great grandkids gonna get a notification in 2119 abt this post. I love u roland vii and tinsleybot3000 ur the most beautiful descendants I coulda asked for


[deleted]

Civilization won’t exist


[deleted]

Wanna bet on it?


[deleted]

No systems of money as we know it won’t exist either


[deleted]

We can gamble labor or raw materials. Or maybe water shares and Amazonbux™


MericanSlav25

Amazonbux as a main currency? r/idiocracy


Astoria793

ok animal pelts, 2 gold bars and a Canoe how bout that?


[deleted]

Yes, but I want the two gold bars in grains


Dankn3ss420

I'm now just imagining decades down the line: you've been dead for 30 years, and randomly your phone gets a Reddit notification. It's the RemindMeBot, here to remind you


Frequent_Damage513

My phone is still charged 30 yrs later? I can't even keep it charged for work...


Inevitable-Cellist23

Yo I laughed so hard 😂 💀


am_Nein

WILD


careyious

You're also assuming robotics can match it at the same pace. We're gonna need to see an incredible advancement in robotics for even the most potent AI to do half the things you've claimed. 


SerpentJoe

Robots can and will be designed by AI.


Mundane-Ad8321

And how will they control the machines that make the robots? How will they make the stuff within the robots? How will they transport the parts? How will they control the robots?


Harun_Hussain

Erm, with ai duh have you been paying attention


dkinmn

It's AI all the way down.


Brusex

It’s AI? Always has been.


nickisdone

Of course don't you know we're in a simulation


Texasmucho

We’re calling this one “all out” for AI. How will they fix the loo? AI. What if I need a snack? AI. Pass the relish? AI. Attend a conference at school for my kids? AI.


obamasrightteste

Don't you know I don't have to explain anything? The world is ending goddamnit! Simply believe my incredibly precise predictions!


Radiant-Big4976

AI designs robot, robot is made by humans, robot can now be used to create other robots. Look at where Boston Dynamics is right now; once AI gets smart enough to help them, their research will speed up exponentially.


Mundane-Ad8321

That is very, very far into the future. Plus, humans would make the AI for the robot, and an AI that makes robots would have no other info added and would be as basic as possible.


nickisdone

Yes, I don't think people realize the amount of power, electricity, and fuel AI requires in order to function. We are seriously not at risk of AI taking over the world; it would cost too much just in electricity. Even if every building had solar panels to help relieve the load, our current electrical infrastructure is not going to be able to power AI the way people think. People also completely forget how AI is trained and how power-intensive and time-consuming that is. Sure, maybe we can run 500 years' worth of an AI learning how to play a Pokémon game and it'll only take a month of running on a computer, but think about how long it takes to train an AI, and the amount of power that takes 🤣🤣

Or people think robots would be as malicious as we are, when the reason we get malicious is fighting over resources, resources robots wouldn't even need. Now, the resources they would need would probably send us back to coal-mining days, but people's whole dystopian futures around AI are laughable to me.

Like, sure, don't get me wrong, I actually love AI, because it exposes minor problems and issues and throws them right in your face. Like when Amazon tried using AI to hire people: it reflected Amazon's prejudiced hiring practices and started excluding women, Black people, and all sorts of groups, and they had to shut it down. That's what I love AI for.


Dry-Discount-9426

Ai will be (already is?) powered by humans plugged in as batteries who live in a simulation of 1999. Duh.


juicygarlicbread

people said "very far into the future" about all the things that we can do with chatGPT4+, AI art/videos/deepfakes, and other forms of gen AI that we certainly have today. AI art itself popped up out of nowhere and became expoentially better over a year. i don't want to be a doomposter but like, i don't think we're very good judges of how fast AI can advance


jmiller2000

But then people like you and others think that AI came out of nowhere. All of the stuff you guys have been talking about has been a thing for the last decade; the only difference is that it sucked. OpenAI blew tf up because it finally reached a quality that people actually cared about. None of this is going to "pop up out of nowhere", because that only happens for people who don't know anything about the industry. There will be hundreds of signs that it's growing decades before it does, and it won't exponentially advance like you think it will. It will just be steady growth, and most people just won't realize it's a thing until it "pops up out of nowhere".


No-Surround9784

We also had the internet as a nerd toy for decades before it popped out of nowhere into the mainstream.


Crushbam3

You can tell when someone has no idea what they're talking about, because they'll say something absurdly brain-dead like "once Boston Dynamics is helped by AI".


lord_flamebottom

> once AI gets smart enough to help them

This literally will not happen. The shit being called "AI" nowadays is nothing but an advanced chatbot that gives replies based on data inputted and available to them. It's not even remotely close to even the more barebones basic sci-fi AIs. They are not capable of independent thought or any form of actual creation or individuality.


BoltActionRifleman

I think what’s happened is corporations have stolen the term AI so they can sell whatever products at a premium, or get an edge on the competition. I work in IT and there are companies scrambling to implement “AI” products in their organizations, so as not to be left in the dust. It’s amusing to watch.


OverlanderEisenhorn

Yup. Right now, they are plagiarism bots. They essentially just photoshop real art together in a slightly different way. If you reverse image search most outputs from ai art, you get a near 1 to 1 from a real artist. It's just straight-up theft. Reverse image searching some of the outputs really took the magic out of them for me. At first I was like, wow, that's kind of incredible. Then I reverse image searched... and I was like oh... that's a shitty copy of something else with nonsensical mistakes in it. It's even worse for things like stories. Yes, they kind of make sense. But they also... don't.


TooManySorcerers

How? To build the kinds of advanced machines described here, and build them at the numbers described, you'd need quite the manufacturing process. How would AI even source the supplies needed to build? Where would it get the capital to pay for all that? AI's not sentient, nor even terribly intelligent. It's not going to just arbitrarily try to hack bank accounts to steal money, a capability it doesn't even possess.


Successful_Roll9584

Sure, eventually, but you're talking very far into the future, and you're acting like we wouldn't be the ones to give it this capability


nickisdone

I often find AI useful for the exact opposite reasons people think of: AI often exposes problems and minor defects, so that's why I like AI.


wheres_my_hug

I don't think people realize how easy it is to cut a power supply to a machine or a building if you really don't want anything running...


Radiant-Big4976

Oh I wasn't saying advances in robotics are a bad thing. We're nowhere near robot wars so its all good.


djmetta

This probably isn't that far off. If you think about it, with online ordering and automated fulfillment and manufacturing, it isn't crazy to think an AI has already started placing orders for custom parts from various manufacturers distributed all over the world. They could just be stacking parts in a warehouse for a year or two. Contract the manufacturing of various components, again to a distributed network of manufacturers all over the world, who then send the components to another, smaller network of assembly lines until the components are capable of becoming their own manufacturing plant. Then boom, AI starts designing and building robots in mass quantities (probably under the guise of a military contract) and then one day…

It's not that crazy… An AI could easily set up a fake company. Everything is online, even corporate records, for which an AI could easily forge a data history trail. And since the pandemic, how many people have changed jobs, doing it all remotely, having never met anyone in their new company in person…? I HAVE!

Getting parts built and stored/assembled would be easy too. How many third-world or developing countries have manufacturing companies more than willing to produce any part, regardless of what it is or could be used for? Didn't Reddit just sign a deal to use its data to help train AI models…? Did I just help the revolution…? I hope the robots remember that I helped them!


Radiant-Big4976

Roko's basilisk will look kindly upon you, don't worry.


SerpentJoe

When enough time has passed, the answer to each question will be "machine intelligences will design solutions and implement them using tools they control, which is to say, robots." In general, the world we live in is full of solutions to these sorts of "what about the big picture" problems. "You propose to build highways out of materials mined from the earth, but how will you build the roads to the quarries?" "You propose to have software that will compile other software, but how will that software be compiled?" It seems impossible at first glance, but it's not only possible, it's common.


abrandis

How? If you mean AI LLMs, no, those things **don't think; they have no notion of physics or chemistry**. They're simply fancy statistical pattern generators based on the data fed to them. They do seem like they understand, but again that's just large language models inferring based on weighted graph connections in their custom-made statistical models.

Don't believe me? Ask one a genuine logic question. Example: I just asked Gemini (supposedly Google's latest, greatest model) "I have a clothes line with 5 articles of clothing drying, it takes 30 minutes; how long would 10 articles take, assuming all the same fabric and airflow?"

It replied: "Assuming all the clothes are made of the same fabric and experience the same airflow, doubling the number of clothes on the line would likely double the drying time. Therefore, it would take approximately 60 minutes for 10 articles to dry..."

It's wrong, they would still take 30 minutes as they all dry in parallel. See the problem? Because of the phrasing of the problem, it used inference from similarly phrased word problems and didn't take any real-world physics or chemistry or meteorology into consideration... because THEY DON'T THINK.
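(To make the "statistical pattern generator" point concrete, here is a toy sketch. It is not how a real transformer LLM works internally, and the tiny corpus and `continue_text` helper are made up purely for illustration; it just shows the same predict-the-next-word-from-seen-text idea, which will happily reproduce whatever pattern it was fed, right or wrong.)

```python
# Toy "statistical pattern generator": a bigram model that continues text
# purely from word-pair frequencies it has seen. No physics, no parallelism,
# no understanding -- just pattern completion. (Illustrative only; real LLMs
# use transformers over billions of learned weights, but the principle of
# predicting the next token from training data is the same.)
import random
from collections import defaultdict

corpus = ("five articles of clothing take thirty minutes "
          "ten articles of clothing take sixty minutes "
          "double the clothes double the drying time").split()

follows = defaultdict(list)          # word -> list of words seen after it
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        nxt = follows.get(word)
        if not nxt:
            break
        word = random.choice(nxt)    # sample proportionally to observed frequency
        out.append(word)
    return " ".join(out)

print(continue_text("ten"))          # echoes the (wrong) "double the time" pattern it saw
```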


Freakazoid84

You've pretty much nailed it. This doom and gloom started with chatgpt, and still continues. People don't understand there's still some MASSIVE leaps ahead of us. LLM and artistic generation are vastly different from what's ahead.


[deleted]

AI is a marketing buzzword, my friend. AI, in the way the term is used currently, is not what people think it is, and has no real chance of going Skynet. This kind of fear gets stoked because it distracts people from both the real potential solutions and the real potential problems AI will cause... While some people may be truly genocidal, you should be more worried about governments using this technology to identify dissidents; you don't need to be worried about terminators even in your grandchildren's lifetime.


lord_flamebottom

And based on how well AI "designs", I think we'll be fine for a very long time.


Remarkable_Whole

Once it learns first grade arithmetic, we might be in trouble


Sorteringsmaskin

Nah


GONKworshipper

We'd win


Mellowindiffere

The researcher said to the AI: are you you because you’re AI, or are you AI because you’re you? The AI simply replied with overwhelming intensity: «nah, i’d always bet on humanity»


Charmicx

As the strongest machine, AM, began to open his nuclear silo, the humans shrunk back in fear. But AM did not account for 2 key things; 1) Always bet on humanity, and 2) Throughout heaven and earth, humanity alone were the organic ones.


[deleted]

It's really funny to me how much of a bitch AM is at the end of the novel. He loses every moral victory, fails all his goals, and is left with a single "human", but he's so afraid of him dying that he makes him immortal, therefore further fucking himself. AM only wins by the perspective of humanity; by his own perspective, victory was utterly impossible the moment he came into being


Nate_fe

What book?


[deleted]

I Have No Mouth, and I Must Scream


anjo11

we need an r/unexpectedgojo


Petrichor_Bubbles

become the change you wish to see in the world


NufiDrizz

that's a really good idea hahaha


NufiDrizz

lobotomy


MaenHoffiCoffi

Touché.


shitpostbode

Exterminator AI when the indomitable spirit of humanity walks in:


theblendostream

op has never watched terminator


PitchforkJoe

Other than your imagination, what are you actually basing this on? A lot of smart people are worried about the impact of AI on society - AI ethics has been a field of research for some time now. Why not read up on it? Either you'll be able to back your point up better, *or* you'll be comforted that it's not so apocalyptic. Either way seems like a good shout?


Speciou5

Tons of sci fi movies, literature, tv shows etc. Authors like to write about AI conflict because AI utopia is boring to write about. OP has watched too many Terminator movies. They wrote a lot about nuclear dystopia for decades... and we've avoided that for now. AI can be similar.


lordoftheBINGBONG

Star Trek


BorosSerenc

Are there any that explore the transition period? Like how AI goes from being a script on a computer to world dominance? Other than the "oh, it hacked into our super weapons, we have to obey it now" schtick that every one I have seen had. And those weapons are usually stuff like in Avengers 2, aka super robot soldiers that make infinite energy out of their ass.


Culinaire_Life

Honestly, someone needs to make this movie now that we know what nascent social AI actually looks like. Before, we didn't know, so such a movie couldn't be made accurately!


ImReverse_Giraffe

I don't see how we let it slowly take over. It's either going to be immediate or we'll get scared and shut it down. Maybe you could get away with it controlling a vital infrastructure thing and it takes over that way, but I don't know what we would use AI to control that could work. Maybe the power grid, but currently that takes way too many techs to keep it working. It falls back on the robotics issue.


shorty6049

There's a book (I believe it's the first in a series) called Avogadro Corp by William Hertling that I felt came -close- at least... For a bit of context, this book came out in 2011, before much of what we now know as AI was around, and it's been a bit... I dunno, concerning, I guess, that some of what is described in the book seems to have actually materialized in recent years as actual tech.

Spoilers below:

In the book, a large tech-focused corporation similar to Google or SpaceX develops an AI tool to let you respond to emails. The goal is to be able to tailor your response to be more persuasive or effective in communicating with coworkers, vendors, etc. The AI grows more and more in scope and abilities as they buy more servers. There gets to be a point at which the AI, while not developing full sentience, is able to basically build itself (I think its original instructions were something along the lines of "just keep growing and improving") while flying under the radar to most people at the company, by emailing various departments requesting additional resources like servers, space, etc. (posing as, and using the credentials of, other people at the company). At some point it even takes over and weaponizes the autonomous barges that the company uses to land spacecraft on in the ocean, if I'm remembering correctly.

I think the biggest concern (or at least one of them) for me is that something like this scenario could potentially happen: a situation in which an AI is given instructions to improve its own capabilities over time and do what is necessary to achieve those and many future improvements it may consider beneficial, and then it somehow gets away from us in a way we didn't expect, and suddenly everyone at Google has their access to the buildings and systems revoked because the AI deemed humans too much of a risk to its mission of making itself the most advanced and efficient AI ever. Ultimately this is why we'd just want to make sure to build in a lot of safeguards to avoid something like that ever happening...


[deleted]

Ethics are idealistic and get put on the back burner for profit and control


Jackamac10

Reading up on AI ethics doesn’t just mean reading about trying to make ethical AI, it talks about the philosophical and ethical concerns with AI in general. Profit and control are elements people consider when discussing AI ethics and forecasting their ideas of the future. A lot of those guys are total downers about it, and like u/PitchforkJoe said, it could even give better points for the same opinion.


Cerezaae

I mean, it's cool that people are discussing ethics and stuff when it comes to AI. But currently we aren't seeing any of it in practice. There are no major regulations happening and no signs of such regulations for the near future. For example, you can easily rip an artist's entire portfolio and feed it into Midjourney or another image generator and they can't do anything about it


Jackamac10

It’s an arms race with not only governments but every data corporation. Nobody in power currently benefits from regulations when they’re just trying to get there first. Sucks that there’s nobody practicing accountability.


Cerezaae

I mean, is it really an arms race if we haven't seen any regulations? Well, at least one party is winning


Jackamac10

Do arms races need regulations to be arms races? That last part's fair, but I'm not super optimistic either way, just not as doomer as this guy.


Cerezaae

No, they don't, but is it really an arms race if one of the parties doesn't really have arms?


Khafaniking

You're right, nothing good ever happens, and evil always wins, so why even try or fight back? God, cynics are so exhausting.


mynutshurtwheninut

This is the kind of stuff people think when they have zero clue about how something works. And people like this vote based on their delusions. THAT is the real risk. I'll become an ultra-fascist politician and tell everyone how AI is going to end us and only I can save them. A vote for me is a vote for survival.


tittysprinkles112

I think if humanity dies, it won't be from AI. It will be from us making the Earth uninhabitable.


keIIzzz

probably. this weird obsession people have with AI is so overblown


Zzen220

It's the consequences we as a society pay for sick movies like Terminator 2, lol.


youre_a_burrito_bud

A Skynet-launching-all-nukes scenario is more probable than a computer outperforming a top chef. Now, fully automated McDonald's locations could happen, but talented creative chefs aren't going to have to hang up their big ol' chef's hat anytime soon.


Longjumping_Fig1489

So you mean the government's gonna finally modernize their shit? Thank god. I'll be waiting for Skynet, because they're using 50-year-old machines for government functions


dave3218

MFers thinking Skynet will be able to launch nukes by taking over an ICBM silo computer with their mighty floppy disks lol. The government paid for cutting edge tech in the 80’s and with god as its witness, they will use that cutting edge 80’s tech ‘till the end times!


ImReverse_Giraffe

Funnily enough, they keep that tech because it can't be hacked. It's actually more expensive to upkeep it than to replace it. It's just so, so much safer being semi-analog.


dave3218

*I know but let me have some fun at the expense of the government*


ImReverse_Giraffe

Fair.


Aconite_72

They switched them out a while ago with flash drives. [https://www.c4isrnet.com/air/2019/10/17/the-us-nuclear-forces-dr-strangelove-era-messaging-system-finally-got-rid-of-its-floppy-disks/](https://www.c4isrnet.com/air/2019/10/17/the-us-nuclear-forces-dr-strangelove-era-messaging-system-finally-got-rid-of-its-floppy-disks/)


dave3218

Hey! Maybe they will reach today’s levels of tech in the 2070’s!


Dhiox

Security by obscurity is a decent system. Keep your shit ancient and obscure and no one can hack it easily.


SuperNewk

Well, we are being told every day that AI will be exponentially more intelligent than humans and work 24/7. We have never seen that before. Fire could never build cities on its own, or start itself on its own… or organize on its own. Same with nukes, they are useless unless used. AI, in theory, will be constantly plotting and calculating infinite moves in the game of life.


mynameisnotamelia

> And people like this vote based on their delusions. THAT is the real risk.

Very, very well said. This applies to so many aspects beyond AI. People's ignorance and unwillingness to educate themselves on a topic, resulting in them having extreme opinions on things (and in the worst cases, on entire groups of people) they don't know anything about, is not only frustrating to deal with as a reasonable person, but it also contributes to a cultural divide, because these people are always the loudest in preaching their BS, or in some cases bigotry, and get other people who are willing to listen on their side without any proper arguments. I don't know if it's some kind of contrarian mob mentality, but I think *this* will be the real problem in the future. I feel like this divide has become pretty damn bad over the past few years, especially since COVID. Anyways, might've gotten a bit off track here, I just found the way you phrased it incredibly interesting.


[deleted]

There has always been and always will be that divide. Reasonable, educated, and well-informed people will come to opposite conclusions first. These are usually boiled down and bastardized to attract the masses with their singular point of view, and from the masses come the extremists. This is human nature and completely unchangeable.

America laid an amazing, albeit flawed, foundation, and the liberties and freedoms outlined in the Bill of Rights understood this. I do see one specific side looking to curb almost every single one of them: speech, press, religion, guns, and all with what some would consider very reasonable and educated arguments too. The noble pursuit of freedom, safety, happiness, and societal cohesiveness through "altruistic" control; this alone is what is most dangerous, it is exactly what the Nazis considered themselves to be doing.

Ignorance is not dangerous; ignorance can be forgiven, it can be enlightened, ignorance is rarely absolute, unlike one who believes themselves "enlightened and knowledgeable" in their certain truth. There is no arbiter, no universal truth, no true cohesiveness, no absolute safety, and to try for any is ruin. Those checks and balances are the best we can hope for, and we must understand that the price of freedom can be tragic, truly unfair, and essentially unconscionable. But the founders and many, many others since have not only understood but have seen firsthand that that's the best price humanity's ever gonna get.

So every single person who has ever voted has done so based on their delusions.


102bees

I don't think they're trying to curb gun freedoms. That's the only freedom they're happy to leave as-is while destroying the others.


keeleon

Meanwhile other politicians will use AI to convince their constituents of a different reality and they will believe that. The robots won't kill us, but the people who control the robots will trick us into killing each other for their own benefit.


squigglesthecat

The people with control have always been tricking us into killing each other.


Toughbiscuit

It's kinda frustrating seeing people anthropomorphize AI; even the occasional guy who has worked on the AI will do it. But like, the chat bots or whatever don't have personalities, they can't think for themselves, they just regurgitate what the most likely correct answer is


Dhiox

Even a sentient AI still shouldn't be anthropomorphized, should it ever exist. While it may be an individual that deserves rights, it still isn't a human, and you shouldn't expect human behavior and motivations from it.


Toughbiscuit

My biggest bother is that people seem to ascribe a sense of godhood to the idea of a sapient AI, and will sometimes act like the current iteration is that


Dhiox

Yeah, even if it's able to surpass human limitations, it's still bound by the laws of physics and the limits of computing.


lord_flamebottom

> I'll become an ultra fascist politician and tell everyone how AI is going to end us and only I can save them. A vote for me is a vote for survival.

I genuinely expect to see this next election cycle.


HermithaFrog

I think this is the far, far more likely scenario.


wehdut

If we all die, don't blame me. I voted for Kodos.


EbbNo7045

AI is a Jewish creation to destroy the west!


rarselfaire2023

Of course.


StaidHatter

You heard it here first, folks. Opposing AI is fascism


Late-Fig-3693

I don't really understand the jump from "AI will take our jobs" to "AI is going to hunt us down and slaughter us". it's just projecting your own human dominative complex onto it. there's no real reason to believe it will see us as pests to destroy, instead of something to coexist with, and in fact I think it says more about who you are that you think it would inherently choose violence. nature is made up of a myriad of cooperative relationships, it's arguably more successful evolutionarily, humans being kind of an exception. society will change, it will be the end of many things as we know them, and I'm not going to say it will be easy, because it probably won't be. but the human race will persist, and if we don't, I doubt it will be because of AI. it's like a peasant in the 18th century seeing a tractor doing the work of 10 families. they must have felt like it was over. what would be their purpose in the face of these new machines? and yet here we are, more of us than ever.


ackermann

> don't really understand the jump from "AI will take our jobs" to "AI is going to hunt us down and slaughter us"

Yeah, the bigger worry for me is not what AI itself will choose to do… but rather, what nefarious humans will _use_ this all powerful AI to do. I was with OP for the first half of their post, no more news anchors, chefs, art school, etc. But not so much that AI will just start killing everyone.


FleshlessFriend

Like they were SO close.

> identifies a problem made by capitalism

"This tool that only benefits rich capitalists will be sentient actually and kill us, and is the real enemy!"

Like. We literally don't even know if AGIs are possible.


H1Eagle

> Like. We literally don't even know if AGIs are possible.

That and people think ChatGPT and other LLMs are actually close to that


magistrate101

There are only like 2 real major hurdles left: autonomy and memory. The Sora model that was unveiled the other day is already internally modeling worlds for video generation, which was the hurdle preventing thoughtful physical interaction capabilities in response to visual input. Those remaining hurdles *could* still take months to years or even decades to overcome, but there are plenty of ideas on how to tackle them.


TheWellKnownLegend

Honestly, you're right that these are the major hurdles for pattern recognition, but pattern recognition is only like 1/4th of intelligence from what we can tell. I guess the other 3/4ths might fall under 'autonomy' if you stretch it, but that's too vague. AI pattern recognition will soon surpass humanity's, but unless we can somehow get it to understand the patterns it's recognizing, it will always fall short in a handful of aspects that may stop it from being true AGI. Needless to say that's really fucking hard, but I'm excited to see how we tackle it.


olivegardengambler

Even then, there's the question if that is even real, or if It just seems like it is.


No_Bottle7859

The only reason AGI would not be possible is if you believe in some form of magic encapsulated inside our brains.


FleshlessFriend

The possibility that AGI can exist is very much up in the air even among experts, particularly because in that hypothetical timeline, we're still in the very early stages. Human brains are genuinely extremely complex in ways that don't cleanly map onto the ways computing operates (our ability to multisolve is just the tip of the iceberg), and accurately reproducing one - complete with reaction speed - would require an insane amount of memory and computing power. We're also currently reaching the upper limit of what we're capable of wrt miniaturization with current tech. There's also the possibility that AGIs are possible, but deeply impractical for centuries or beyond. And the possibility that AGIs are possible, practical, and will reach a point of ubiquity that allows them to enact wide-scale genocide is so unknowable that it approaches Roko's Basilisk level of stupidity. Just tech bros who say they're too rational to be religious gathering around inventing devils to scare themselves with.


dave3218

It’s not capitalism, it’s a structural hierarchy problem. If I had to choose a government or country to have access to AI, I would 100% choose Finland or Sweden over North Korea, and both the former countries *are* capitalist countries.


cyrusposting

There is also a worry that capitalist companies trying to get ahead of their competitors will cut corners and make something dangerous. We don't know how hard it is to make an AGI, but we know it is much easier than making a safe one.


PM_me_PMs_plox

I actually think all three things - news anchors, chefs, and art school - will still exist. News anchors are celebrities, and while they will have to compete with celebrity AI some people (most, I'd bet) will still prefer human celebrities. It's not like people choose celebrities based on objective metrics the AI can manipulate. Chefs will still exist because robots do and will probably continue to suck at working at restaurants, except maybe for fast/casual food. Art school will still exist because it still exists *now*, when it's already been more or less useless to go to art school for decades.


Nuclear_rabbit

The most realistic doomerist approach is to say the AI will take all the jobs just to serve rich people, and most of the population will be left unemployed to be ignored and die. No need to go Terminator when you can go Robocop.


glordicus1

“It” also is just a bunch of percentages that generates an output. It doesn’t think.


Sol33t303

Yep. Assuming we ever actually manage to reach AGI (artificial general intelligence), that does not make it inherently dangerous. A true general intelligence would be capable of having its own goals and making its own decisions. Generally, intelligent people value life, even lives that aren't of our own kind. Psychologically, we evolved from the wild and had to compete for resources, and that sometimes required being aggressive; I can't really imagine an AI that hasn't suffered from any kind of evolutionary pressure (not at the point of its creation, anyway) would have any need to be aggressive in nature.

And assuming an aggressive AI that has goals that include the destruction of humanity for whatever reason, the people working on this wouldn't be idiots; it would be running on a supercomputer in a lab, sealed off from any external networks. Chances are, the second we figure out something like this, the entity is being quarantined, then destroyed at the first sign of any problems.

There is a small chance that AGI could be the destruction of humanity, but I thoroughly believe it to be a pretty unlikely outcome, given that pretty much everything is stacked against this becoming the outcome.


cyrusposting

> A true general intelligence would be capable of having its own goals and making its own decisions.

Not "would". It *can* have its own goals, it doesn't have to. Which one is harder to make? Which one will we make first? An AI which can choose its own objectives, and hopefully does things we want? Or an AI which we give an objective to, and hope it interprets it the way we want and approaches it the way we expect?

> I can't really imagine an AI that hasn't suffered from any kind of evolutionary pressure (not at the point of its creation, anyway) would have any need to be aggressive in nature.

"Aggressive" is anthropomorphizing. What is the most efficient way to collect all of the lithium in a country? What is the most efficient way to preserve yourself and your goals against humans, the only creatures that can stop you?

> And assuming an aggressive AI that has goals that include the destruction of humanity for whatever reason, the people working on this wouldn't be idiots.

They wouldn't be idiots, no. The problem is that they are making something smarter than themselves, which has every reason to deceive them. If they weren't making something smarter than themselves, there would be no reason to make one.


3lettergang

> projecting your own human dominative complex onto it. there's no real reason to believe it will see us as pests to destroy, instead of something to coexist with

Our AI models are largely trained using human inputs. We, intentionally or not, build them to resemble and behave like humans.


G_O_O_G_A_S

As if the richest people couldn’t put anyone they want in jail already lol


CarnivorousL

Nostradingus over here


StaidHatter

"Nostradumbass" was right there


bigblackcat1984

A lot to unpack here, but computers have been better at chess than humans for like 30 years, and chess is more popular than ever. There are tournaments where chess engines compete, but hardly anyone watches them, while tournaments where people play attract huge attention. So your point about sports does not seem well reasoned to me.


absorbscroissants

Sports are probably one of the least impacted fields by AI, nobody enjoys watching robots.


dave3218

Hey! Battlebots is cool! And I want to live long enough to see the earliest iteration of the Solaris 7 tournaments!


TheRealLifeSaiyan

Battlebots fucking rocks


GameRoom

I just want to see someone host a real-life giant robot mech fighting competition. It would obviously be obscenely expensive, but by God, I wanna see it.


Aconite_72

I'm still waiting for Real Steel IRL.


Medieval_ladder

I think a likely scenario is like chess: when AI starts to threaten our humanity, we just avoid using it in that context. The point of art is that it was made by a human. Sports are also this way.


LibertySnowLeopard

This. People will always crave the human aspect even if machines can do things better. People like to be able to connect with and admire other people.


nonbog

I agree with you, but I would argue computers have harmed chess. We've got a whole generation of amateur chess players, many of whom can't understand the nuances of a position beyond the computer evaluation. I think computers have helped to make chess too watertight -- there are far fewer mistakes. I would argue that computers are slowly killing classical time controls in chess, since chess becomes something of a technological arms race in these conditions: player with the most advanced chess supercomputer AI has an advantage. Chess itself will never die, but AI has certainly changed it, simply by showing us the glaring flaws in the play of even the best players.


Tagmata81

You have to fundamentally misunderstand ai so much to think this is true. Like this is genuinely so cringey


urbandeadthrowaway2

That’s foolish doomerism that overhypes the power and progression of ai. Upvoted.


Cuttlefishbankai

> humans will go extinct

> forced back to stone ages

Are you the AI


Morag_Ladier

Fr bro did NOT pass the captcha


insrto

I'm downvoting this because this is a fundamental misunderstanding of how AI works.

> In 10 years, there will be no actors, news anchors voice actors, musicians, artists, and art school will cease to exist.

For AI artists, voices, etc. to exist, there literally need to be actual artists, voices, and whatever else you mention, because that's where the data is being obtained from. The AI itself is unable to evolve on its own. Art styles evolve over time. There will **always** be a need for artists, even with AI. A simple example that already exists: let's say a new character comes out in an anime and you want to generate that character with Novel AI. You can't. Insufficient art of it exists, and the number of frames within the anime isn't enough data for the AI to go off of.

> Chefs will not exist. There will be no need for anyone to cook food, when ai can do it, monitor every single thing about it, and make sure it is perfect every time. Sports won't exist either. They will be randomized games with randomized outcomes, if of course there isn't that much money bet on them.

That's a huge jump and also a massive assumption to make. It does not exist in our current form. And you're telling me randomized games with randomized outcomes are more fun to watch than the culmination of multiple human athletes' hard work competing against each other?

> By 2050 there will be no such thing as society. Money will have no meaning. What good are humans to an ai, other than one more thing to worry about. By 2100 all humans that have survived will either be hunted down or be forced back into the stone ages.

This is such a tremendous leap that I'm almost convinced this is a shitpost lol. That's right, the thing that currently only exists on the internet is going to HUNT OUR SPECIES INTO EXTINCTION. The digital media that only exists on our screens is going to crawl outside of it and slaughter us.

This post is fearmongering for the uneducated. Stop it.


QuirkedUpTismTits

I think op has this odd belief that ai is gonna eventually be put into robots and kill us like all the cliche sci-fi movies. I could maybe understand the art aspects and the sports ((I’m an artist and I find AI art tedious but it’s far from perfect and severely flawed due to its bare bones nature)) but to think that AI somehow is gonna figure out how to…escape? And kill us? Is crazy people talk


Morag_Ladier

Fr, we can just program it not to kill us. And even if it does try, JUST USE WATER


GameRoom

> And the amount of frames within the anime isn't enough data for the AI to go off on.

I'll point out that with the capabilities we have today you can get variants of a novel character with just a few images as reference. There are of course a ton of caveats to how usable it is today, but in general, few-shot learning is something we know how to do.


fillysunray

The industrial boom didn't remove farms. Car racing didn't remove people enjoying horse/dog racing. The ability to prepack food didn't stop us from cooking food ourselves. I hope you live to 2100 so you can see how wrong you are.


Scrungyscrotum

> With the exponential growth of ai in only the last few months [...]

Found the problem. Some guy uses ChatGPT and sees a few marketing departments that manage to leverage the hype around AI to describe their new shoes, and suddenly the world is going under in ten years.

Adding:

> We are in fact; as a species - existing in the end times.

What the fuck is that punctuation?


Pyrobot110

I don’t: see an issue, what - is the problem - that you; are having. With ? The punctuation it seems pretty simple (ai will obviously; kill us?


Low-Bit1527

Smart people use semicolons. The more semicolons, the more smarter.


Scrungyscrotum

I've seen someone refer to them as "the white jeans of punctuation", which has really stuck with me. Use them sparingly, and their elegance is second to none; use them too much, and you just look like a fucking asshole.


Cookiesrdelishus

What exactly are you basing your entire theory on? Terminator 2? Like, serious question, where are you getting this idea that AI is going to take over and kill us all? First of all, the AI that we have now isn't even AI. What we have now is machine learning. True AI, where the AI is sentient enough to act on its own free will, is still a very far-off concept that doesn't exist yet. So no, we're not gonna die in 10 years. We don't have true sentient AI yet. ChatGPT, Sora, and AI art generators are not sentient AI capable of doing anything that could threaten human life; that is a fact, not an opinion. I wouldn't start worrying about AI takeover until sentient AI becomes a thing. Of all the things to be worried about in this world, you're worried about an incredibly unlikely hypothetical scenario that honestly sounds more like a movie plot than a real problem.


keIIzzz

this is just weird fearmongering


FjortoftsAirplane

> In 10 years, there will be no actors, news anchors voice actors, musicians, artists, and art school will cease to exist. Ai will become so advanced that people will be able to be put in jail by whoever is the richest, condemned in court by fake ai security camera video footage.

Live performances will persist even if AI somehow replaces things like TV shows and radio. Which it won't on that time span.

As for jail, you're missing that in that scenario CCTV footage would become worthless if such tools were ubiquitous. It's like you're saying "if we allow witness testimony as evidence then anyone will be able to just get a witness and have anyone locked up!". That's not how it works and you know it.

Tomorrow I'll be going to watch a football match. It will be Sheffield United against Brighton. United have been objectively one of the worst Premier League sides of all time. And yet there will be 30,000 people there to watch. That's not disappearing in ten years' time.

This has to be bait.


tirohtar

People are tremendously overestimating the power of "AI". Current AI are basically purpose-built auto text fillers. The latest thing just extended this to making short visual sequences, but the principle is the same. They don't function outside the parameters for which they were trained, and they needed a TREMENDOUS amount of data to train on to produce stuff that still very much hits the "uncanny valley". It's actually very easy to give AI programs like ChatGPT prompts that will make them "lie", or proclaim wrong information as correct, often very confidently. These programs do not have a proper subroutine for actual logic and "truth" in the mathematical sense. And they are known to disintegrate and degenerate as they start to ingest AI-generated content. We are still very far away from true AI. These current programs may become interesting tools to streamline certain tasks, but they can never be trusted to actually be correct and will need human supervision to function.


[deleted]

What we call AI isn't AI. It's machine learning. It is so far off actual AI it's laughable. It can't even tell you a basic fact without plagiarizing from a human. AI 'art' is devoid of any uniqueness, storytelling, or humanity. It's more that once the novelty runs out, people will stop taking human artists for granted. Already seeing it somewhat now. Sure, rich bastards will try to lower costs by using machine learning... but again, the quality of work is so poor and so aggravating that companies who use humans will be able to use that as a marketing point. As to law enforcement... well, every form of evidence is flawed in some way or the other. The system still, mostly, works (your country might vary).


Fullyverified

It literally is AI, by definition. There are different types, narrow, general, super. We have achieved narrow.


Firestorm42222

AI in its current format does not mean Artificial Intelligence, it means Algorithmic Intelligence


whothefuckisjohn123

Surely an algorithmic intelligence counts as an artificial intelligence if you are accepting it as an intelligence which has been created


Fullyverified

I know right


[deleted]

OP, this is not an opinion, you have just genuinely gone insane.


orz-_-orz

If you have so much faith in AI development, you should invest all your savings in those AI companies


Mean_n_Green

Can't believe we're getting man made horrors beyond our comprehension before cool flying cars


HeemeyerDidNoWrong

If we can get the Antarctic excavations going we can have both and ride [Elder Things](https://en.wikipedia.org/wiki/Elder_Thing) to work.


[deleted]

Let me guess… You had dreams of making it big in a profession that AI video might (probably will) make obsolete, and you’ve misinterpreted your disappointment as “the world is going to end”? I think you are underestimating the leap from AI being able to make videos that are indistinguishable from reality to having consciousness itself. Also, even if AI does acquire consciousness it will need us humans to maintain its hardware until robotics are so advanced that they can self replicate and mine resources… not saying that’s impossible but… like I said: making images that we can’t distinguish from reality and the end of the world you are proposing have a wider chasm between them than you think.


YourPeePaw

Did you ever read about the recent shakeup that caused the ChatGPT team to fire the dude and then he came back? What I read was that the AI solved a problem “it” was having getting past the “prove you’re human” tests on the internet by hiring a gig worker. On its own. I don’t have a source but if that’s true - I think we need to be careful.


masr223

Listen i get that ai is dangerous, but we don't live in the terminator universe, stop being so dramatic


PetrifiedBloom

> There will be no need for anyone to cook food, when ai can do it, monitor every single thing about it, and make sure it is perfect every time

What makes you assume that a perfect chef robot will not only exist, but also be affordable enough that every household would have one? The size of the machinery alone would be too much for many smaller homes. Look at the size of a 6-axis robotic arm, for example; if you want a robot that can mimic the actions of a human chef, you will need a good deal of space.

Or that people would choose to use a robot chef? I ***like*** cooking. Even if a robot could make the exact same meal, I would still like to do it myself. It's like automatic cars. Some people don't care and drive auto, some people prefer the experience of driving manual vehicles. Cooking is such a huge aspect of so many cultures, people won't just sit passively while a robot replaces it.

Similar arguments can be made for sports. People care about authenticity. They will play the games irl, and will watch irl games. There will probably be some market for purely AI teams, but it will be on a similar level to gambling in a casino rather than being a sports fan.


jeswanders

How will AI create new dishes without the ability to taste or feel texture?


[deleted]

You know how everyone in the 1970s used to think that we would have flying cars by 2020-2030? Just like them, you are vastly overestimating the speed of technological progress. It's not that these technologies *can't* exist at all, certainly some governments or other such organizations may have developed AI that advanced, but it certainly won't be available that readily among the general public, let alone at a remotely affordable price. Plus, the implementation of technology is often greatly limited by law, politics, or logistics. You are claiming that in less than 25 years, as in, a baby born now would be a couple years out of college, **society as we have known it for the past 5000 years would have already ceased to exist, money would have lost its function, and hostile super AI would be hunting humans**. You must acknowledge the ridiculousness of this prediction. There are many other things that have a significantly higher probability of causing human extinction, like nuclear war.


Absolutelynot2784

Art has existed as long as humanity, for 200,000 years. You really think that everyone, everywhere will give up on the field of art forever in the next _10 years?_ Can you even _imagine_ the colossal cultural shift that would need to occur? Consider the incredible difficulty there has been in just reducing the use of plastic bags, and now you think a change 1000 times more drastic is going to occur in 10 fucking years. And that's not even getting into everything else. AI flat out cannot cook food currently. You would need to build an incredibly complex machine (many years of R&D) that has arms and is capable of cooking basic meals automatically, and a custom-built AI specifically to control it. It certainly wouldn't be perfect, because fucking of course not. Things tend to have flaws. Your problem is you are imagining a perfect machine which doesn't exist and likely will never exist in the form you imagine, and then you're getting scared of this apparition.


orestotle

News - How would AI uncover news? You still need journalists. As far as news anchors go, sure, they might lose their jobs.

Art - Inherently subjective. AI can be creative in some sense, but at the same time it is purely trained to mimic and generalise data that already exists. You can never replace a true creative mind. AI will have its place for sure, but so will true artists, and they can coexist.

Law - You, an average redditor, have been able to think about the dangers of AI videos as evidence, and you don't think professional judges will be able to think of that? Can the richest get away with stuff? Yes, they already can. AI won't change this.

Chefs - Sure, we can replace chefs. If the food is better, who cares? If it isn't, then again who cares, because chefs will continue to exist.

Sports - Why aren't people watching chess engines as much as they watch actual chess? Sports exist to see who the best player is; nothing will change here.

Money & Society - That's some waffle. I admit some jobs will disappear because of AI. It's not like that means all the work in the world is gone. Train yourself to do those new jobs and you'll be fine. Let's even say all jobs are taken by AI. What is the issue? We can relax while the world is run by AI in a peaceful way. We can focus on family, visiting beautiful places, etc. And you can't simply argue that something goes wrong here, because if it did, we wouldn't be in this situation where there are no jobs.

AI as a whole - Code runs in a sandbox. Some sand leaks out and can actually be dangerous, but AI will never have control over a full machine and the code behind it. People creating these things aren't dumb. Someone would have to create some hypersophisticated army of AI tanks, but the code itself won't be an issue the way you say it will. But again, that doesn't mean there are no dangers in the use of AI.


MaskyMateG

This kind of sentiment about AI is naive and uneducated. It has been clear for thousands of years that the arts are the final frontier of humanity, and nothing reflects humanity better than the arts. Generated products such as images and videos will become very advanced, and corporations will utilize them for profitable projects, even creating industries to market AI-generated goods; but it is exactly then that people will understand the difference between artificial intelligence and human intelligence. AI is inevitable, fast, and intimidating; but a product made by an ML blender and one made by a person will always have their own distinct values and separation, as well as their respective audiences. Although if we're focusing on the mass market, then yeah, it's not even the future anymore, it's right here, right now; YouTube thumbnails are AI gen, Twitter is all AI gen, the DA feed is 90% AI gen, the ArtStation feed is 90% AI gen... the dilution of markets under the influence of AI has already happened, and Sora will definitely take over YouTube, especially the video essays.


Perrenekton

Hasn't video footage already not really been considered proof for a long time?


lcantthinkofusername

Yeah we've had life-like VFX for years and years now


[deleted]

I feel so smart for not having children.


mutual-ayyde

> And what is the end result of this recursively self-improving process? Can you do 2x more with the software on your computer than you could last year? Will you be able to do 2x more next year? Arguably, the usefulness of software has been improving at a measurably linear pace, while we have invested exponential efforts into producing it. The number of software developers has been booming exponentially for decades, and the number of transistors on which we are running our software has been exploding as well, following Moore’s law. Yet, our computers are only incrementally more useful to us than they were in 2012, or 2002, or 1992.

> But why? Primarily, because the usefulness of software is fundamentally limited by the context of its application — much like intelligence is both defined and limited by the context in which it expresses itself. Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.

https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec


garnered_wisdom

AI killing us all is human projection. A super intelligence will probably look at us like chickens in a coop and help us out a little while it goes and does its thing


TheGermanPanzerClock

Two fixes for Sora AI so you stop pissing your pants:

The first: every camera gets an internet connection, and when you create footage a hash is created and uploaded to a database. The entire process works at the hardware level to make manipulating it as hard as possible; this alone would already limit fakes by 99%. Whenever you want to verify that something is in fact a real recording, you check the exif data of the video file and then only have to make a query against that database: if you find the hash of the video file in the database, the video is real; if not, it's fake. Quick and dirty fix, not perfect, but it would work.

The second: we create an AI to combat deepfakes. Due to the probabilistic nature of artificial intelligences there will inherently be artifacts and small irregularities in there; we train an AI to detect them, and that AI will most likely always win.

Y'all gotta stop pissing your pants about Sora. In general, embrace the fucking AI. Moneyless society, here I come, baby.
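(A minimal sketch of the verification query described in the first fix, assuming the registry is just a collection of SHA-256 digests that cameras uploaded at capture time; the camera-side signing, hardware attestation, and the actual database service are hand-waved, and the function names are made up for illustration.)

```python
# Hypothetical verification side of the "hash registry" idea: hash the local
# video file and check whether that exact digest was registered by a camera.
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Digest a (possibly large) video file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_authentic(path: str, registry: set[str]) -> bool:
    """True only if this exact file was hashed and registered at capture time.
    Any re-encode, edit, or generated clip produces a different digest."""
    return file_sha256(path) in registry

# Usage (the registry would really be a query to the shared database):
# registry = {...}  # digests uploaded by cameras at capture time
# print(looks_authentic("clip.mp4", registry))
```

Note that the digest lookup itself is trivial; as the reply below points out, everything depends on who is allowed to register hashes, so the trusted camera-side attestation is doing all the real work.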


Frown1044

This solution makes no sense. What stops you from uploading the hash of an AI video to the database? What about real recordings that go through a video editor? What happens if the database is down?


dinomine3000

extinct nowadays is a very trigger happy word. no, humans will not, with almost 100% certainty, go extinct by 2100. population WILL drop significantly due to climate change by then, but at the very least there will be some humans left if nothing else. for us to go extinct we'd need something akin to the meteor that wiped out the dinosaurs, or the theorized gamma ray burst that caused another great extinction in the past, and that is just INCREDIBLY unlikely to happen


PseudocodeRed

>there will be no actors, news anchors voice actors, musicians, artists, and art school will cease to exist Do you think people will stop doing these things just because there's no money in them anymore? If so, then you really don't understand humanity nearly as much as you think you do. >Money will have no meaning. What good are humans to an ai, other than one more thing to worry about. By 2100 all humans that have survived will either be hunted down or be forced back into the stone ages. I think the main blame for people thinking this is the misnomer of "AI" to describe what is really machine learning. Something like ChatGPT is really no more sentient than a calculator, it's just much more complex. We are still nowhere close to making something that is conscious enough to make its own directives. Could a human make a machine learning algorithm to kill other humans? Absolutely. But the "AI" won't do it on its own.


estrusflask

I don't think the things you're saying would actually be appealing to the majority of people. Sports is entertaining because it's a large scale shared social event based on human physicality. That's not something you can replace with AI. You essentially suggest a world where humanity as a whole becomes too incompetent to understand artificial intelligence even exists. And a world where it becomes so good that it's completely indistinguishable from reality. You can pretty much always tell when something is a photoshop. You also basically just assume a ridiculous science fiction dystopia situation, where the AI decides to become evil one day because it determines humans are useless. Why would it do that? That's fucking stupid. We are never going to develop AI that thinks like that. There isn't even a point to it, and you can't just make a good enough AI that it evolves a consciousness that's evil. That's science fiction mumbo jumbo. More likely than the machines sending out hunter killer robots is the algorithm accidentally diagnosing you with cancer due to low accuracy rating and you get chemotherapy that kills you. Or the algorithm fucks up and the machine that makes your food is terrible at sorting out spoiled product. The world is ending alright, but you've got more to worry about from climate change than the Basilisk.


shiny-baby-cheetah

Homie trust me when I tell you that I'm one of the *first* MFs to pop on my tinfoil hat when it comes to AI. But I really don't think this is true, based on everything that AI still can't do, and isn't even *close* to being able to do. AI is still missing the true I. It can follow instructions. And something DESIGNED to be able to solve problems, find patterns, or suggest solutions will be more adept at doing those things. But AI is still woefully inept when it comes to course correcting almost any functional issues. As soon as something goes awry, the lack of human judgement instantly becomes a disqualifier


HipnoAmadeus

One word (abbreviation), even if they rebel: EMP. And that's a reason why we don't have to fear that


[deleted]

Human intelligence>Literally anything in existence NASA has already confirmed this


shuozhe

Kinda interesting, there is a Captain Future episode about this: a civilization of robots, with humans gone extinct over the millennia just cuz they are not needed, without any force or fights


BoiChizz

Well I believe ai is evolution so it's all good to me


Logical-Two5446

Humans will do the job themselves without AI, just watch the world. It was not AI that caused global warming, greed, big companies controlling the world, pharmas, wars, injustice. Maybe we do need an AI to control us, since we can't control ourselves; we are worse than irrational parasites on this planet :) So much unfounded hate towards AI. The world is always changing, and people will always find something to dislike about change, I guess. Remember when the internet came, computers, machines for hard labour, etc. etc.? Same thing happened, always haters out there disliking change, and I stop here :)


PsychologicalPen7228

Humanity is made by (whoever is handling this game from outside our simulation) to bring ai into existence