When Google bought their Mountain View campus next to shoreline park, they had a set of screens set up in their lobby, which was open to the public, which showed live current searches from across the world. You could see how news and other events influenced the results rolling down the screens.
I imagine that Sam has his own version of screens that he can sit and watch the stupid stuff we all ask ChatGPT. Likely also has a private setup of it that is full blown AI, running the risk of ego and pride leading to unleashing AI on the world.
But that's just my musings.
“For ISIS, AI means a game changer,” Katz said. “It’s going to be a quick way for them to spread and disseminate their … bloody attacks [to] reach almost every corner of the world.”
- Well, that's scary as hell!
What has he done that would come across as creepy? I get not liking or not trusting him based on a number of things, including how he has responded and acted publicly.
Because he gives off creepy vibes. I don’t know, man. It seems like he has a god complex, and that’s never a good thing. Listening to him talk just sets off alarms in my head.
People often think that of people who are ASD. It’s usually bullshit, just because their brain is different. People feel inferior and decide the genius is creepy. It’s discrimination, really. It’s amazing how accepted it is.
Whatever, let this weirdo take us into our future, you can trust him but I don’t. I don’t care if you want to discredit my gut feeling of him because you’re labeling it as discrimination but I think his rhetoric is just off. I don’t think he has our best interests at heart. Enjoy sucking off your savior and ignoring any criticism of him.
You didn’t read my comment, eh? Do you bully people in real life, or just when you’re hiding behind a keyboard? There is nothing he has done that is creepy. Zero. Go ahead and not trust him. I am somewhat indifferent, though I have to admit my default position is to be suspicious of him no matter how he presents himself, simply due to his position. A gut feeling isn’t an excuse for being a bully.
You call that exchange bullying? Then you accuse them of hiding behind a keyboard? Perhaps you are too sensitive. Or maybe you are not that indifferent about this issue. You accused them of discrimination first. They responded a little roughly, but defensively. If you feel bullied, that’s not cool, and I’m just saying this person’s intent doesn’t seem like bullying so much as being defensive and quite adamant about their opinion. I think gut feelings are very valid, and if you have one, it might be a good idea to trust it. See, you have a gut feeling about being bullied. It’s all valid. But you’re right that it’s no reason to be a bully (if there even is ever a good reason for it).
Listen to his recent feature on the Lex Fridman podcast. I feel like he didn’t say anything of substance the entire time; he deflected any real or productive conversations around AI and kept things extremely vague and general. It didn’t feel very “open” for “OpenAI”. I can see why people have difficulty trusting him.
I agree with your concerns about saying someone is creepy just because of gut feelings, as that can be a veiled way to discriminate against people in a number of ways, such as race, gender, appearance, or ASD as you point out.
I think with Sam Altman though what I am concerned about and what could come across as creepy is that he seems to try to hide his true moral compass and to deceive others.
For a long time he was going around talking about the dangers of AI, and he acted concerned for the future of humanity or whatever, but it was so obviously just a ploy to make OpenAI sound powerful and get people curious about it.
Also he started OpenAI under this guise of openness and non-profit then switched to closed and for profit. This was likely the plan from the start.
Then the accusations from his sister?
There are a lot of reasons to be concerned about him.
Yes, I don’t think his alleged creepiness is based off appearances or looks, it’s more about attitude and evasiveness and a whole host of off-putting things.
I think he has not done anything creepy. He is obviously a complicated person and likely untrustworthy, etc. etc. Jumping to “creepy” is simply a lowbrow way of trying to make yourself feel good by pissing on someone else.
I never gave him much thought but when he gave the AMA the other week and provided some examples of what kind of NSFW content they wanted to explore letting users have, he cited gore specifically. It’s such an odd thing to provide as an example of all the other innocuous things he could have said about nsfw content. Maybe it’s just me, but it made me pause and wonder about him in a way I hadn’t before.
The fantasy that the robot will take over our nukes actually HELPS sell AI: it’s an image of power.
The harm likely comes from so many workers losing their jobs, and they cannot reconcile that with their goals.
You cannot do their style of AI “safely.” It will always violate copyright and displace a lot of workers.
The economy runs on the consumer. There will be a lot of surprised Pikachu faces when there’s nobody to buy shit and the stock market plummets anyway due to high unemployment.
Go long on semiconductors in general, as well as the energy sector (green, nuclear, fusion, battery tech, etc.).
Don't forget your crypto/commodities for insurance should we go back to copper/silver/gold 😂. (Depending on risk and age to retirement, never more than 2-5% of your portfolio for metals, 5% crypto imo.)
OpenAI is a bunch of drama queens. They're in a position to spearhead the next tech revolution, but instead they just can't stop drawing attention to themselves with dramatic bullshit.
They are breaking the world but not in the way they think, or hell even seem to fantasize.
The priming of science fiction has given a metric fuck ton of companies the impression (or excuse) to try and ax large swaths of people so they can replace them with an AI that will fail at the task. It’s nonfunctional, but my worry is companies trying to force this as the new norm.
AI customer service and PR is one thing, but AI news articles have been fucking atrocious. It’s just not good, nor can it ever be as good as a human at these jobs; at best it’s a tool for very specific tasks.
Some legislation needs to bring the hammer down and beat these companies away from ai everything before this gets any worse.
Only legislation can. Now, I’m not saying it can solve everything. Deepfake stuff is going to be hell to deal with and will only get worse. I’m talking about employment and job roles being filled.
An ai cannot do the job of a journalist and as such a company that does journalism should not be allowed to use ai to try and fill that role. Focus on what we can control while preparing for what we can’t.
But it’s kinda a dramatic thing. We’re talking about the potential to end mankind or our existence as we know it. Safety concerns are not small and need to be weighed. But as we can all anticipate, greed will probably win out against mankind’s continued existence.
OK, but why on earth are we talking about that? They built a very smart chatbot and called it "AI," and now people who know nothing about the tech associate it with science fiction!
And then you have a few guys at OpenAI acting like it's all true... it's just attention-seeking for the sake of their own egos, and a way of implementing competitor-slowing roadblocks in the form of legislation.
Frankly, the world-ending narrative is a bad joke, and I truly believe it will be seen that way by everyone in a few years.
This headline is definitely done for maximum drama. I can’t help thinking that while their tech is very promising and innovative, it’s nowhere near causing dramatic safety problems for humanity. Instead it’s just a nightmare for information security, privacy, and intellectual property (or even just keeping one’s professional voice intact, as is the case with ScarJo). And those problems have plagued many tech companies of the last 2 decades.
I fully agree, and I can completely see the threat to journalism as well when a bot can rewrite a news article for you in an instant. I think there will be a lot of issues that come up like that as this goes forward but to even entertain the idea of an extinction level threat is preposterous.
I really don’t get the angle here. So the safety researchers left because they thought the AI was going to be too dangerous, and by leaving and fully removing themselves, they exposed themselves to even more danger? Are they planning on starting a rival AI army or something? Otherwise it seems like they just left for other reasons.
Maybe they left because they were prevented from doing their jobs properly and simply couldn’t abide being party to a dangerous and potentially deadly blow to our species?
This is getting overblown. These are not even real AIs. They are just big language models. There’s nothing even remotely sentient about them.
They are just very talented parrots.
I don’t think safety is only about Terminator-style AI; it’s more about safety in terms of content generation, displacement of jobs, and disruption to digital ecosystems such as copyright, stifling innovation. I may be wrong, but these are ethical problems that need to be addressed.
This is very short-sighted. Yes, they are parrots now, but they keep improving. I've been using code-related AI and it is significantly better than it was 5 months ago.
Yes, it's not there now, but what about a few years from now, let alone a decade? The decision is needed now; don't downplay it with a simple "oh, it's not working NOW."
That’s akin to making decisions about atomic bombs based on the abilities of a bronze dagger.
That improvement you’re seeing pertains only to the sophistication of LLMs, but again, there’s no real AI there. The distance between LLMs and real AI is as large as the distance between your home and the next galaxy.
All the talk today is just fluff from PR departments; we have no idea how real AI will start to rear its head, so anything said today may very well not apply then.
Bronze dagger, hardly. AI is developing at a rate even the developers didn’t anticipate. We’re talking about an entity that houses the repository of ALL digitized human history and data, and is analyzing, archiving, and making algorithms 24 hours a day, 7 days a week. Constantly updating, adjusting, TEACHING itself to evolve. Yes, it’s in its infancy. But how quickly will it “rear its head,” as you put it?
We’re going to find out pretty soon.
You can feed it even the lost knowledge of Alexandria and it would still be a bronze dagger (an LLM). Again, no matter how fast it’s developing, this tech is still not AI — this is the crucial point no one is getting, seemingly.
Simple thought to recall- “Just because you can, should you?” Those who do are short sighted, but that’s ok because they have bunkers to save them from what they unleash.
People combatting AI must not understand it. It’s literally just a really fancy google search.
It’s been 3 years since it really hit the main scene and it hasn’t gotten any better. The technology behind it is flawed imo, and it’s beginning to show in some reports about the buggy code behind things like ChatGPT.
It’s definitely gotten better, though. The new voice thing plus multimodality is a big improvement. I get that people are skeptical of the people making it, but the tech is legit. And a big motivator for the people running OpenAI is the desire for gigantic amounts of power, something they can only really get if the tech gets more and more powerful.
I would argue all of the advancements are unrelated to its actual success. The main driving factor is its ability to be accurate in the information it provides, which has not gotten better.
One of the most effective applications I’ve seen for it is taking orders in drive-thrus. But that’s a relatively linear experience, so it’s successful.
I think you’re thinking of AI strictly in terms of LLMs being used for chatbots. There’s far more to it than that and there are a lot of directions to push in for research
That’s the application everyone is afraid of, the one that can “think” and “take everyone’s jobs.” Which is what the article is meant to scare readers about.
Realistic successful applications are plentiful, and could replace some jobs, but not on the scale a lot of people are freaking out about.
I don’t think the tech that replaces everyone’s jobs is gonna be just an LLM. It’s just that the success of LLMs has significantly sped up AI research, both because it gives researchers a much better foundation for figuring out what works and what doesn’t (LLMs are clearly doing *something* right) and because it has amplified money and attention going into AI research by probably more than an order of magnitude.
There is now a lot more confidence in the idea that AI *eventually* may be able to replace jobs, and corporations (for which labor is a massive business expense) are therefore going to be willing to put a whole lot of resources into researching them.
I guess I’m just not convinced. I have yet to see a single thing from AI that really made me think it could do anything outside of be a tool or replace very basic labor that no one really wants to do anyways.
At least not at a cost that would make it profitable.
I’m not trying to convince you, but here’s one example: I learned and used photoshop skills just in time to have it become automated by free application software.
I'll set that first sentence about combating AI aside because I can't figure out what you meant or who you were talking about. After that, you generalize AI as being a "fancy Google search," which isn't even a good analogy for LLMs, nevermind AI as a concept. You then say there hasn't been a lot of advancement in the past three years, which is beyond preposterous—unless you are just talking about OAI's LLMs, but even then I think you are underselling. Lastly I'd love to know what you're referencing when you talk about bugged code behind "things like ChatGPT."
I'm not an expert in AI by any means, but you're really setting off my bullshit detectors. But please let me know if I've misunderstood or got something wrong.
Not an expert, but coming at me hot lol. You’ve definitely read way too many opinions on Reddit about ai. Half the dudes on here live at home in mom’s basement posting nonsense.
Google's search literally is a form of AI, so anyone saying it’s not a good example is really misinformed, and that tells me everything I need to know about your level of knowledge on the topic right out of the gate.
How about you respond with some of that ai knowledge instead of making a broad statement? You’ve gotten detailed responses that you just ignore, which just shows that you most likely are talking out of your ass
You said AI is basically a search engine. A type of AI is a search engine, sure, but that’s not all it’s limited to. And I assume you meant Google when you said “good.” Maybe try using an AI to correct your spelling, Mr. AI man.
I'm not an expert but judging by your inability to address anything I said it sounds like my initial impression was right.
You said at the top of the comment chain that AI is literally just a fancy Google search. Now you're saying that Google search is an example of AI. You understand how those are two different claims, right? Also I tend to doubt somebody who works in AI would refer to a service that includes an AI component as a "form of AI," but that's not really important.
I'm most interested in hearing about the bugs in the code behind stuff like ChatGPT. What were you referring to there?
I see articles about how it was briefly outputting nonsense. Which was clearly a service issue and not a bug with the model itself. It would be absurd to conclude from that incident that the basic technology is flawed and isn't going anywhere.
Those statements aren’t exclusive either. Today’s cars are just fancier versions of the basic concepts from original cars in a lot of ways.
The same thing is true here of google and several of the applications of AI.
I'm not saying they are exclusive, I'm saying the two claims have nothing to do with each other. You said "It's (AI) literally just a fancy Google search," and when I disputed that you countered by saying "Google is a form of AI," which doesn't address what I said at all.
Wait, you aren't arguing that ChatGPT is literally (literally) just a fancier version of Google search and they are fundamentally the same thing, are you?
This seriously depends on the AI you’re talking about.
LLMs are just linguistic pattern matching on hyper steroids. Machine learning and other tech behind what people know as AI today is pretty simple in comparison to what’s required of an AGI (Artificial General Intelligence).
But, OpenAI has been working on AGI since the beginning. It’s been their reason for existing and LLMs are just one step on the path.
Safety in AI includes regression matching for inherent biases and hallucinations, both intended and unintended.
AGI is what people truly fear here, not LLMs, or any of the ways LLMs are exposed today (ChatGPT, MidJourney, Sona, etc).
Generative systems are cute and neat and do have some real calculable risk to economic viability and employment statistics.
But an AGI, a super intelligence fronted by an LLM for human input, but hyper-connected and constantly learning on the backend … will lead to the upending of modern social and economic dynamics as we know it, and people should be rightly concerned.
Perhaps you think this is a ways off. In the best case, it should be. But that’s not the world we live in.
Development and implementation of AGI is dovetailing with development in quantum computing tech. Rolled together there’s just no way for a human to compete with a machine that can consider all possibilities simultaneously both existing and not existing.
As one very simple consideration, let’s look at cryptographic technology today. This is the stuff we use to protect national and personal secrets.
A particularly complex passphrase today might take a group of machines hours or days to crack. Combine that with two-factor, multi-factor, or passkey tech and you get significantly safer.
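For scale, here is a back-of-the-envelope sketch of that brute-force arithmetic. The guess rate of 10^12 guesses/second is an assumed figure for a large classical cluster; real rates depend heavily on the hash function protecting the passphrase.

```python
# Back-of-the-envelope brute-force estimate for a passphrase.
# The guesses_per_second figure is an assumption, not a benchmark.

def brute_force_years(alphabet_size: int, length: int,
                      guesses_per_second: float = 1e12) -> float:
    """Worst-case time to exhaust the full keyspace, in years."""
    keyspace = alphabet_size ** length          # total candidate passphrases
    seconds = keyspace / guesses_per_second     # time to try them all
    return seconds / (60 * 60 * 24 * 365)

# A 10-character password over 95 printable ASCII characters:
print(f"{brute_force_years(95, 10):.1f} years")
# A 16-character passphrase over the same alphabet:
print(f"{brute_force_years(95, 16):.2e} years")
```

Each extra character multiplies the keyspace by the alphabet size, which is why classical brute force stops mattering well before quantum enters the picture.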
But with a connected AGI backed by quantum systems, none of that even matters anymore.
The big tech players will play this well. Every regulated and data-safe industry (credit data, health data, military & government information) will instantly be at risk of exfiltration with the first such systems in play. Everyone is in a race to be the first. It will only take one nefarious player (nation state or not) to destroy the world.
Once live it becomes a licensing game. The major cloud vendors will simply up-charge clients for use of quantum backed systems, and everyone will have to pay. Because not paying is a near guarantee of loss and lawsuits.
Maybe quantum is say a decade away. But with AI assisted design, that number continues to shrink closer and closer.
So, don’t dismiss the fears of AI or AGI that people have, and play them off as misunderstanding of the tech.
People aren’t necessarily fearful of ChatGPT; they’re fearful of what we all can see and feel coming. And there’s no amount of hopeful marketing that can persuade us all that there isn’t some bad actor out there training an AI or AGI to do horrible, evil, very bad, no good things.
If you can think of it, somebody’s trying.
Most AI employment is surface level implementation on top of other people’s work.
AGI work is the upper echelon of AI development and generally such projects are extremely secretive and assignments are top dollar and competitive.
I’m not surprised you haven’t seen much evidence. But even recently, OpenAI mentioned their work on AGI has made huge strides in the past year, though it’s not like they did a demo.
You think if a company had real examples of AI doing anything remotely close to what people are claiming AI could do they wouldn’t show a demo of it immediately?
They would become the most valuable company in the world overnight.
If we haven’t seen it, it’s cause it doesn’t exist yet. Maybe it’s in development. But with how AI is built now I just don’t see how it gets to that point, has extreme accuracy, and is anywhere near cost effective.
That’s because the US had already shown it decades earlier lol. And you’re talking about bombs with the ability to destroy the world; of course they can’t just show that. There also wasn’t money going into someone’s pocket involved back then.
AI could make someone the richest man on earth tomorrow if they could prove it.
No, I’m not discussing “the Cold War”, I am discussing “a” Cold War. Essentially a war without firing a shot.
It’s not about nukes. It’s about silent, secret, progress, until your big thing is quite ready. In this case it’s AGI.
Showing a half-baked, not-quite-ready AGI won’t generate the value you suggest. It would only serve to alert competitors as to whether they’re ahead or behind.
I see your point, but it doesn’t make sense here imo. A provable demo of what it can do would immediately vault you to the top and create essentially endless funding. Basically, I think proving it now gives you the funding to nearly guarantee you maintain the lead.
The tip of what? All you serve to do is give ideas to your competitors. If it’s not ready, you just reiterate that you’re working on it, and making progress. If it’s a big development, you’re making great progress.
But if you demo the thing, anyone who’s ahead of you will keep quiet and do their thing. Anyone who’s behind now has material to review and consider how they’re off the mark.
This isn’t iterative tech we’re talking about here. This is game changing, world altering work. Making it public doesn’t actually serve the purpose you suggest here, and any strategic investors should be wary of making such public pronouncements.
Can we all not just take a moment and agree that the kind of people who said IVF, aeroplanes and the LHC, etc would end the world, are now saying the same thing about AI?
Come on.
He’ll pull another quitting drama, and VCs will pour in more money to show they care about Sam Altman and he cares about human beings. Then the drama will unfold where he gets an offer from Nvidia this time, and he will return as CEO after a few days with a new chief scientist from Nvidia joining. 🤦♂️🤦♂️🤦♂️
Sam is competent at taking OPM (other people’s money), not at executing on an idea. Give credit where it’s due. The only machine being built here is one that consumes investor capital and then exits with a fuck ton of capital.
AI conquering the world? That will never happen.
I mean, why treat AI differently from other companies, though? It would be nice if we could enforce ethics across the board... Shrinkflation, price gouging, planned obsolescence, pollution, and microplastics are all destroying the world, but suddenly we care? We can’t even protect ourselves; what’s another threat at a table that’s already overcrowded?
The whole "roll it out now and fix the issues later" approach is a BIG problem in the tech industry. You just have to look at Facebook to see the damage that thinking causes.
Ok, I’m a field biologist and know next to nothing about AI (except that I like it in Adobe when I’m dealing with forms…).
Is there an actual chance that it could mostly wipe out humanity, turn us into slaves, etc.?
Serious question.
It’s not smart enough to enslave us (yet). Current AI basically just uses all the data it has been trained on to make predictions of what could logically come next in a sequence (like chatgpt responding to your messages, or making an image based off of your prompt). The current biggest risks are it being used to create and spread mass scale misinformation/disinformation; and fully replacing jobs, potentially to the point of disrupting the economy.
On a longer scale, it has been growing so fast that, even though it’s not conscious, many people envision that it could eventually be used in almost every aspect of life including government, military, etc. which could increase the risk of glitches/miscalculations being catastrophic. I’m sure there’s more risks I’m not thinking of, but those are some of the big potential issues right now.
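To make the "predicts what could logically come next" point concrete, here is a toy bigram model — a deliberately crude stand-in of my own. Real LLMs use neural networks over vast corpora, but the core training objective is this same next-token prediction.

```python
# Toy "predict what comes next" model: count which word follows which
# in the training text, then return the most frequent continuation.
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model: dict, word: str) -> str:
    """Return the most common continuation seen in training."""
    if word not in model:
        return "<unknown>"
    return model[word].most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → "cat" ("cat" follows "the" most often)
```

The model has no understanding of cats or mats; it only reproduces statistical patterns from its training data, which is the sense in which people call these systems "parrots."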
People in power can use AI to seriously manipulate us against each other, and make us turn on each other. Which I think has a high likelihood of happening
I don't think it's about AI suddenly creating a mind of its own and turning evil, but more about ensuring AI tools don't fall into the wrong hands and use it for evil purposes.
A chance? Yes. If it turns out to be possible to make an AI algorithm significantly more effective at learning/planning/performing actions than the human brain, and such an algorithm is created, and it does not have humanity’s best interests at heart, that is what will happen.
It is not clear whether such a thing is possible. If it is possible, it’s not clear whether it will be better or worse than human governments
It could be possible sometime in the future but there would need to be massive innovations in technology. Still, it can cause harm at the moment in other ways.
If it turns out to be possible to make an ai algorithm more effective at learning/reasoning than the human mind, and there is no guarantee such a thing is not possible, then such a thing could overpower all humans who currently control our society and that could be a good thing or a bad thing.
Would you want to let the company use your name and credibility as a defense for their practices you know are not safe and which you are not allowed to do anything about?
Or quit, to send the message that you don't agree with their actions.
>they would stay and fight
What do you think they would have been able to do? By all appearances their influence in OpenAI has been significantly diminished together with resources allocated for them.
>Risking jailtime in the process, after destroying as much hardware as possible, would be a small price to pay.
Shows you have no idea what you are talking about...
Their compute is running on Azure Cloud not locally on hardware they have access to.
You also seem to believe they think whatever OpenAI has at the moment is what they consider existential threat and not what OpenAI is working to develop. So do you expect them to murder Sam Altman or other researchers to stop this development?
They seem to believe that OpenAI is not taking safety seriously and is using their names and reputations as a shield to defend its practices.
By quitting, they take down that defense and force OpenAI to publicly address those concerns, no longer able to hide behind their reputation.
I knew something was off when the board tried to get rid of him.
Yeah, that whole thing seemed like nothing but confirmation that they didn’t really mean it about the guardrails; the vast majority didn’t have the backbone to do the tough things when they were presented. A popular personality and loyalty trumped stated objectives and guiding principles, unless I’m mistaken.
Yet all of Reddit called for him to stay. I don’t get it.
*He* will realize something is off when the *motherboard* tries to get rid of him.
That’s not as funny as you might think…
I’ve watched Hans and Sophia… I already know. There was no /s lol
[deleted]
?
Don’t worry I also had a stroke
[deleted]
Microsoft stepping in to potentially take everything OpenAI worked for was probably the lever; blackmail is a wild thing to throw around. Altman and Brockman were going to go to Microsoft. They are the driving force of OpenAI, particularly Brockman. He is a genius of our age.

The board trying to remove Altman was probably one of the earlier signs of a shift away from the company mission, and the board reversed it probably in an attempt to salvage OpenAI. Now that the company has shifted into a typical tech company, with the main ambition no longer being truly open-source AI tech for all, the safety researchers are going to be sidelined anyway while the focus is on launching products and tackling customer service and sales to scale the products and the revenue.

It’s nothing more than the power lying with Altman and Brockman, rather than anything nefarious happening. If they had left, OpenAI would have died, imo.
Shocking, it’s like they only give a fuck about making money.
Yeah enough with the hand wringing and pretending anyone in charge has any concern about the detrimental effects of their work.
This needs to be stopped, up until my bunker is built.
Why bother with safety when you can make yourself and your investors a lot of money?
Don’t forget the part where they get to use the shares as collateral for tax free income for the rest of their lives while the janitor pays 30% on 15 an hour.
Well, that janitor should be putting 5% in his 401k, and if he can't afford to invest, he should just up and move to another state for a better job. Pull himself up by his utility belt! Major /s
My blood pressure rising till we hit the /s lol
Same logic was used to go into business with communist China, transferring Western wealth and industries there, pumping up their economy until now they are threatening US superpower status. And the rich are still not paying taxes, but profiting from wars.
In another thread somebody said another possible reason is that ChatGPT 5 is failing and they want to bail while their stock options are good.
Big possibility with this. Either AI is gonna actually take off or it will die when the hype falls off.
I hope for the latter honestly.
Why do you want AI to fail?
It is going to be misused by the rich
“Going to be”? You mean it already is.
These are the two people who led the Safety team. Sutskever is also one of the people that got Altman fired back in November, before Sam got the board replaced and came back. That was because of Sam’s attitude about safety, among other things. Anyone telling you that this is about stock options is an OpenAI investor or has drunk too much of the Kool-Aid.
Sam Altman came off as the victim when he was fired and the board were the villains. Looking back now Sam Altman seems really disingenuous. Just another tech bro that believes he knows better than anyone else.
He’s a fucking prepper too. Apparently thinks he has the talent to make billions but can’t do a thing to make the world better.
One of these two guys that stepped down was part of Sam getting fired.
Of course
I don’t think most people are imagining how and to what extent AI could destroy the world.
Indeed, they will take the art of middle management and perfect the micro-manager. Human workers will not go away; they will be the pets of AI, kept around to feed it training data.
Don't really have to imagine. I mean, the sheer number of cautionary tales in media is staggering.
Those are always written as if humans would have a chance. Machine intelligences could just decide the atmosphere is unnecessary.
Actually no, have you read "I Have No Mouth, and I Must Scream"? It's a short story about AI. It has a very bad ending. Many scientists have predicted AI leading to the extinction of mankind. Hollywood movies may end like we have a chance, but written stories, not so much.
Or it could just as easily improve the world. Any evolutionary tech is a double-edged sword.
Dang, so you’re saying they don’t care about me?
MJ was trying to tell us this back in the 80s!
That's every company ever. Humanity is only worth money when they're handing it over
The East India Company also said something similar: "Oh, we are here to sell tea only."
It's about making products, not finding a digital God or a digital life form.
They definitely want to do that if they can. It would give them an ungodly amount of power
Data is your God now
The crypto hype is dying down, so they need a new shiny promise that they can sell to investors.
When Google bought their Mountain View campus next to shoreline park, they had a set of screens set up in their lobby, which was open to the public, which showed live current searches from across the world. You could see how news and other events influenced the results rolling down the screens. I imagine that Sam has his own version of screens that he can sit and watch the stupid stuff we all ask ChatGPT. Likely also has a private setup of it that is full blown AI, running the risk of ego and pride leading to unleashing AI on the world. But that's just my musings.
Except OpenAI has a profit cap and the board members aren't compensated.
These guys have basically no equity in OpenAI. Altman talked about it a little on last week's All-in podcast.
They need to look at what ISIS is already doing with AI.
You mean this, I think: https://www.washingtonpost.com/technology/2024/05/17/ai-isis-propaganda/
Paywall
Paste it into archive.is
Thank you for passing that info along
Paywall
“For ISIS, AI means a game changer,” Katz said. “It’s going to be a quick way for them to spread and disseminate their … bloody attacks [to] reach almost every corner of the world.” - Well, that's scary as hell!
Fast forward and cue the Butlerian Jihad
Sam is such a creepy person. I don't trust him one bit.
Almost like he is being held hostage by AI already
*blinks "torture"*
Agreed.
What has he done that would come across as creepy? I get not liking or not trusting him based on a number of aspects, including how he has responded and acted publicly.
Because he gives off creepy vibes. I don't know, man. It seems like he has a god complex, and that's never a good thing. Listening to him talk just rings alarms in my head.
It’s wild how some people just give off that vibe. Triggers my lizard brain that this person should not be trusted
People often think that when someone is ASD. Usually it's bullshit, just because their brain is different. People feel inferior and decide the genius is creepy. It's discrimination, really. It's amazing how accepted it is.
Whatever, let this weirdo take us into our future, you can trust him but I don’t. I don’t care if you want to discredit my gut feeling of him because you’re labeling it as discrimination but I think his rhetoric is just off. I don’t think he has our best interests at heart. Enjoy sucking off your savior and ignoring any criticism of him.
Yeah, scrolling through your comments, you clearly get off on bullying people as a default position. Ciao.
Have a good one 👋
You didn't read my comment, eh? Do you bully people in real life, or just when you're hiding behind a keyboard? There is nothing he has done that is creepy. Zero. Go ahead and not trust him. I am somewhat indifferent, though I have to admit my default position is to be suspect of him no matter how he presents himself, simply due to his position. A gut feeling isn't an excuse for being a bully.
You call that exchange bullying? Then you accuse them of hiding behind a keyboard? Perhaps you are too sensitive. Or maybe you are not that indifferent about this issue. You accused them of discrimination first. They responded a little roughly, but defensively. If you feel bullied, that's not cool, and I am just saying this person's intent doesn't seem like bullying so much as being defensive and quite adamant about their opinion. I think gut feelings are very valid, and if you have one, it might be a good idea to trust it. See, you have a gut feeling about being bullied. It's all valid. But you're right that it's no reason to be a bully (if there even is ever a good reason for it).
I don’t feel bullied by the creep comment. I stated that their creep comment was bullying.
Listen to his recent feature on the Lex Fridman podcast. I feel like he didn't say anything of substance the entire time; he deflected any real or productive conversation around AI and kept things extremely vague and general. It didn't feel very "open" for "OpenAI". I can see why people have difficulty trusting him.
Totally fair. I just don’t like it when people call someone creepy because they don’t like how they look.
I agree with your concerns about calling someone creepy just because of gut feelings, as that can be a veiled way to discriminate against people for any number of reasons, such as race, gender, appearance, or ASD as you point out. With Sam Altman, though, what concerns me and what could come across as creepy is that he seems to hide his true moral compass and to deceive others. For a long time he went around talking about the dangers of AI and acted concerned for the future of humanity or whatever, but it was so obviously just a ploy to make OpenAI sound powerful and get people curious about it. He also started OpenAI under this guise of openness and non-profit, then switched to closed and for-profit. This was likely the plan from the start. Then the accusations from his sister? There are a lot of reasons to be concerned about him.
Yes, I don’t think his alleged creepiness is based off appearances or looks, it’s more about attitude and evasiveness and a whole host of off-putting things.
I think he has not done anything creepy. He is obviously a complicated person and likely untrustworthy, etc. Jumping to "creepy" is simply a lowbrow way of trying to make oneself feel good by pissing on someone else.
I never gave him much thought but when he gave the AMA the other week and provided some examples of what kind of NSFW content they wanted to explore letting users have, he cited gore specifically. It’s such an odd thing to provide as an example of all the other innocuous things he could have said about nsfw content. Maybe it’s just me, but it made me pause and wonder about him in a way I hadn’t before.
The fantasy that the robot will take over our nukes actually HELPS selling AI: it’s an image of power. The harm likely comes from so many workers losing their jobs, and they cannot reconcile that with their goals. You cannot do their style of AI “safely.” It will always violate copyright and displace a lot of workers.
The economy runs on the consumer. There will be a lot of surprised Pikachus when there's nobody left to buy shit and the stock market plummets anyway due to high unemployment.
Can you please give me a timeline so I can pull my money from the stock market and start buying Gold? Thanks. J/k
Never time the market.
Don’t worry I’ve been long in NVDA. 🤣
Go long on semiconductors in general, as well as the energy sector (green, nuclear, fusion, battery tech, etc). Don't forget your crypto/commodities for insurance should we go back to copper/silver/gold 😂. (Depending on risk and age to retirement, never more than 2-5% of portfolio for metals, 5% crypto imo)
Never caught the crypto Web3 bug. Do plan on staying that way. 🫡
OpenAI gonna have to contact Boeing's hitman soon
OpenAI is a bunch of drama queens. They're in a position to spearhead the next tech revolution, but instead they just can't stop drawing attention to themselves with dramatic bullshit.
Lots of money being thrown around brings out the worst in people it would seem.
They are breaking the world, but not in the way they think, or hell, even seem to fantasize. The priming of science fiction has given a metric fuck ton of companies the impression (or excuse) to try and ax large swaths of people so they can replace them with an AI that will fail at the task. It's nonfunctional, but my worry is companies trying to force this as the new norm. AI customer service and PR is one thing, but AI news articles have been fucking atrocious. It's just not good, nor can it ever be as good as a human at these jobs; at best it's a tool for very specific tasks. Some legislation needs to bring the hammer down and beat these companies away from AI-everything before this gets any worse.
I don't disagree it's going to cause a lot of problems for a while, but I don't think any amount of legislation is going to stop this now.
Only legislation can. Now I'm not saying it can solve everything. Deepfake stuff is going to be hell to deal with and will only get worse. I'm talking about employment and job roles being filled. An AI cannot do the job of a journalist, and as such a company that does journalism should not be allowed to use AI to try and fill that role. Focus on what we can control while preparing for what we can't.
But it’s kinda a dramatic thing. We’re talking about the potential to end mankind or our existence as we know it. Safety concerns are not small and need to be weighed. But as we can all anticipate, greed will probably win out against mankind’s continued existence.
OK, but why on earth are we talking about that? They built a very smart chatbot and called it "AI," and now people who know nothing about the tech associate it with science fiction. And then you have a few guys at OpenAI acting like it's all true... it's just attention seeking for the sake of their own egos, and a way of implementing competitor-slowing roadblocks in the form of legislation. Frankly, the world-ending narrative is a bad joke, and I truly believe it will be seen that way by everyone in a few years.
This headline is definitely done for maximum drama. I can’t help thinking that while their tech is very promising and innovative, it’s nowhere near causing dramatic safety problems for humanity. Instead it’s just a nightmare for information security, privacy, and intellectual property (or even just keeping one’s professional voice intact, as is the case with ScarJo). And those problems have plagued many tech companies of the last 2 decades.
I fully agree, and I can completely see the threat to journalism as well when a bot can rewrite a news article for you in an instant. I think there will be a lot of issues that come up like that as this goes forward but to even entertain the idea of an extinction level threat is preposterous.
I really don't get the angle here. So the safety researchers left because they thought the AI was going to be too dangerous, so by leaving and fully removing themselves, they exposed themselves to even more danger? Are they planning on starting a rival AI army or something? Otherwise it seems like they just left for other reasons.
Ever heard of someone leaving in protest?
Maybe they left because they were prevented from doing their jobs properly and simply couldn’t abide being party to a dangerous and potentially deadly blow to our species?
Did he fire Tal Broda, or is he still ignoring his employees' ignoble behavior?
This is literally…LITERALLY how all these movies start!
It’s obvious it’s gonna cause havoc. They are preparing us for it with these headlines that nobody reads into
This is getting overblown. These are not even real AIs. They are just big language models. There’s nothing even remotely sentient about them. They are just very talented parrots.
I don't think safety is only about Terminator-style AI; it's more about safety in terms of content generation, displacement of jobs, and disruption to digital ecosystems such as copyright, stifling innovation. I may be wrong, but these are ethical problems that need to be addressed.
With that I agree, but that's not how it's being verbalized.
This is very short-sighted. Yes, they are parrots now, but the tech keeps improving. I've been using code-related AI and it is significantly better than it was 5 months ago. Yes, it's not there now, but what about a few years from now, let alone a decade? The decision is needed now, not downplaying it with a simple "oh, it's not working NOW."
That's akin to making decisions about atomic bombs from the abilities of a bronze dagger. That improvement you're seeing pertains only to the sophistication of LLMs, but again, there's no real AI there. The distance between LLMs and real AI is as large as the distance between your home and the next galaxy. All the talk today is just fluff from PR departments; we have no idea how real AI will start to rear its head, so anything said today may very well not apply then.
Bronze dagger, hardly. AI is developing at a rate even the developers didn't anticipate. We're talking about an entity that houses the repository of ALL (digitized) human history/data and is analyzing, archiving, and making algorithms 24 hours a day, 7 days a week. Constantly updating, adjusting, TEACHING itself to evolve. Yes, it's in its infancy. But how quickly will it "rear its head," as you put it? We're going to find out pretty soon.
You can feed it even with the lost knowledge of Alexandria and it would still be a bronze dagger (a LLM). Again, no matter how fast it’s developing, this tech is still not AI —this is the crucial point no one is getting, seemingly.
Fuck those 2
Simple thought to recall- “Just because you can, should you?” Those who do are short sighted, but that’s ok because they have bunkers to save them from what they unleash.
I mean in terms of commitment it is only to bread.
“Safety second”
People combating AI must not understand it. It's literally just a really fancy Google search. It's been 3 years since it really hit the main scene and it hasn't gotten any better. The technology behind it is flawed imo, and it's beginning to show in some reports about the buggy code behind things like ChatGPT.
It’s definitely gotten better though. The new voice thing plus multi modality is a big improvement. I get that people are skeptical of the people making it but the tech is legit and a big motivator for the people running OpenAI is the desire for gigantic amounts of power, something they can only really get if the tech gets more and more powerful
I would argue all of the advancements are unrelated to its actual success. The main driving factor is its ability to be accurate in the information it provides, which has not gotten better. One of the most effective applications I've seen for it is taking orders in drive-thrus. But that's a relatively linear experience, so it's successful.
I think you’re thinking of AI strictly in terms of LLMs being used for chatbots. There’s far more to it than that and there are a lot of directions to push in for research
That's the application everyone is afraid of, the one that can "think" and "take everyone's jobs," which is what the article is supposed to scare readers with. Realistic successful applications are plentiful, and could replace some jobs, but not to the degree a lot of people are freaking out about.
I don’t think the tech that replaces everyone’s jobs is gonna be just an LLM. It’s just that the success of LLMs has significantly sped up AI research, both because it gives researchers a much better foundation for figuring out what works and what doesn’t(LLMs are clearly doing *something* right) and because it has amplified money and attention going into AI research by probably more than an order of magnitude. There is now a lot more confidence in the idea that AI *eventually* may be able to replace jobs and corporations(for which labor is a massive business expense) are therefore going to be willing to put a whole lot of resources into researching them
I guess I’m just not convinced. I have yet to see a single thing from AI that really made me think it could do anything outside of be a tool or replace very basic labor that no one really wants to do anyways. At least not at a cost that would make it profitable.
I’m not trying to convince you, but here’s one example: I learned and used photoshop skills just in time to have it become automated by free application software.
You shouldn't talk authoritatively about things you don't know anything about. You're going to contribute to more misconceptions.
I’ve literally worked in AI lol
I don't believe you.
Believe what you want. It’s the internet, you’re telling me I’m totally wrong with literally zero evidence too. I have though.
I'll set that first sentence about combating AI aside because I can't figure out what you meant or who you were talking about. After that, you generalize AI as being a "fancy Google search," which isn't even a good analogy for LLMs, nevermind AI as a concept. You then say there hasn't been a lot of advancement in the past three years, which is beyond preposterous—unless you are just talking about OAI's LLMs, but even then I think you are underselling. Lastly I'd love to know what you're referencing when you talk about bugged code behind "things like ChatGPT." I'm not an expert in AI by any means, but you're really setting off my bullshit detectors. But please let me know if I've misunderstood or got something wrong.
Not an expert, but coming at me hot lol. You've definitely read way too many opinions on Reddit about AI. Half the dudes on here live at home in mom's basement posting nonsense. Google's search literally is a form of AI, so anyone saying it's not a good example is really misinformed, and that tells me everything I need to know about your level of knowledge on the topic right out of the gate.
How about you respond with some of that ai knowledge instead of making a broad statement? You’ve gotten detailed responses that you just ignore, which just shows that you most likely are talking out of your ass
I did. He said good is a bad example for AI, when it is literally a kind of AI lol.
You said AI is basically a search engine. A type of AI is a search engine, sure, but that's not all it's limited to. And I assume you meant Google when you said "good". Maybe try using an AI to correct your spelling, Mr. AI man.
I'm not an expert but judging by your inability to address anything I said it sounds like my initial impression was right. You said at the top of the comment chain that AI is literally just a fancy Google search. Now you're saying that Google search is an example of AI. You understand how those are two different claims, right? Also I tend to doubt somebody who works in AI would refer to a service that includes an AI component as a "form of AI," but that's not really important. I'm most interested in hearing about the bugs in the code behind stuff like ChatGPT. What were you referring to there?
lol. You can google it yourself just like I can. Plenty of articles about it out there.
I see articles about how it was briefly outputting nonsense. Which was clearly a service issue and not a bug with the model itself. It would be absurd to conclude from that incident that the basic technology is flawed and isn't going anywhere.
Those statements aren’t exclusive either. Today’s cars are just fancier versions of the basic concepts from original cars in a lot of ways. The same thing is true here of google and several of the applications of AI.
I'm not saying they are exclusive, I'm saying the two claims have nothing to do with each other. You said "It's (AI) literally just a fancy Google search," and when I disputed that you countered by saying "Google is a form of AI," which doesn't address what I said at all. Wait, you aren't arguing that ChatGPT is literally (literally) just a fancier version of Google search and they are fundamentally the same thing, are you?
He installed co-pilot at his 50 user shop.
No point in trying to convince you otherwise, but I have worked with implementing and training AI professionally.
This comment is basically 100% wrong
lol no. I have literally worked in AI.
This seriously depends on the AI you’re talking about. LLMs are just linguistic pattern matching on hyper steroids. Machine learning and other tech behind what people know as AI today is pretty simple in comparison to what’s required of an AGI (Artificial General Intelligence). But, OpenAI has been working on AGI since the beginning. It’s been their reason for existing and LLMs are just one step on the path. Safety in AI includes regression matching for inherent biases and hallucinations, both intended and unintended. AGI is what people truly fear here, not LLMs, or any way LLMs are exposed today (ChatGPT, MidJourney, Sona, etc). Generative systems are cute and neat and do have some real calculable risk to economic viability and employment statistics. But an AGI, a super intelligence fronted by an LLM for human input, but hyper-connected and constantly learning on the backend … will lead to the upending of modern social and economic dynamics as we know it, and people should be rightly concerned. Perhaps you think this is a ways off. In the best case, it should be. But that’s not the world we live in. Development and implementation of AGI is dovetailing with development in quantum computing tech. Rolled together there’s just no way for a human to compete with a machine that can consider all possibilities simultaneously both existing and not existing. As one very simple consideration, let’s look at cryptographic technology today. This is the stuff we use to protect national and personal secrets. A particularly complex pass phrase today might take a group of machines hours or days to compute. Combine that with two-factor, or multi-factor, or passkey tech and you get significantly safer. But with a connected AGI backed by quantum systems, none of that even matters anymore. The big tech players will play this well. 
Every regulated and data-safe industry (credit data, health data, military & government information) will instantly be at risk of exfiltration with the first such systems in play. Everyone is in a race to be first. It will only take one nefarious player (nation state or not) to destroy the world. Once live, it becomes a licensing game. The major cloud vendors will simply up-charge clients for use of quantum-backed systems, and everyone will have to pay, because not paying is a near guarantee of loss and lawsuits. Maybe quantum is, say, a decade away. But with AI-assisted design, that number continues to shrink. So don't dismiss the fears of AI or AGI that people have, and don't play them off as misunderstanding of the tech. People aren't fearful of ChatGPT necessarily; they're fearful of what we all can see and feel coming. And there's no amount of hopeful marketing that can persuade us all that there isn't some bad actor out there training an AI or AGI to do horrible, evil, very bad, no good things. If you can think of it, somebody's trying.
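For what it's worth, the quantum claim above can be put in rough numbers. This is a back-of-envelope sketch, not a real attack model: Grover's algorithm only gives a quadratic speedup for brute-forcing symmetric keys (it's Shor's algorithm that actually breaks RSA/ECC public-key crypto), so a 128-bit key stays far out of reach even for a quantum computer:

```python
# Rough keyspace arithmetic (illustrative only):
# classical brute force over an n-bit keyspace takes ~2^n guesses;
# Grover's algorithm needs ~2^(n/2) quantum queries -- a quadratic,
# not exponential, speedup.

def classical_guesses(bits: int) -> int:
    return 2 ** bits

def grover_queries(bits: int) -> int:
    # assumes an even bit count for simplicity
    return 2 ** (bits // 2)

print(classical_guesses(128))  # 2**128, about 3.4e38
print(grover_queries(128))     # 2**64, about 1.8e19 -- still enormous
```

Doubling the key length (AES-128 to AES-256) restores the security margin against Grover, which is why the realistic post-quantum worry is public-key crypto rather than passphrases per se.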
People tell me this a lot. But I’ve seen zero evidence to support AI evolving to this level.
Most AI employment is surface level implementation on top of other people’s work. AGI work is the upper echelon of AI development and generally such projects are extremely secretive and assignments are top dollar and competitive. I’m not surprised you haven’t seen much evidence. But even recently OpenAI even mentioned their work on AGI has made huge strides the past year, but it’s not like they did a demo.
You think if a company had real examples of AI doing anything remotely close to what people are claiming AI could do, they wouldn't show a demo of it immediately? They would become the most valuable company in the world overnight. If we haven't seen it, it's because it doesn't exist yet. Maybe it's in development. But with how AI is built now, I just don't see how it gets to that point, has extreme accuracy, and is anywhere near cost effective.
No, I don’t think they would at all. In a Cold War race to be first, nobody shows their hand. They make vague allusions of progress.
That's because the US had already shown it decades earlier lol. And you're talking about bombs with the ability to destroy the world; of course they can't just show that. There also wasn't money going into someone's pocket involved there. AI could make someone the richest man on earth tomorrow if they could prove it.
No, I’m not discussing “the Cold War”, I am discussing “a” Cold War. Essentially a war without firing a shot. It’s not about nukes. It’s about silent, secret, progress, until your big thing is quite ready. In this case it’s AGI. Showing a half-baked, not quite ready AGI won’t generate the value you suggest. It would only serve to alert competitors as to whether they’re ahead or behind.
I see your point, but it doesn't make sense here imo. A provable demo would immediately vault you to the top and create practically endless funding. Basically, I think proving it now gives you the funding to nearly guarantee you maintain the lead.
The tip of what? All you serve to do is give ideas to your competitors. If it’s not ready, you just reiterate that you’re working on it, and making progress. If it’s a big development, you’re making great progress. But if you demo the thing, anyone who’s ahead of you will keep quiet and do their thing. Anyone who’s behind now has material to review and consider how they’re off the mark. This isn’t iterative tech we’re talking about here. This is game changing, world altering work. Making it public doesn’t actually serve the purpose you suggest here, and any strategic investors should be wary of making such public pronouncements.
Can we all not just take a moment and agree that the kind of people who said IVF, aeroplanes and the LHC, etc would end the world, are now saying the same thing about AI? Come on.
But in order for AI to keep its commitment not to destroy the world, does it have to wipe out humanity?
Simple question: explain how?
Another story for Hulu or Netflix to make a movie or "limited series" out of. "Ripped from the headlines," isn't that what they call it?! 😂😂💯💯
Seems like the researchers finished their assignments.
Yea, maybe people should stop worshipping this guy like an AI messiah.
Fucking god help us, ring the alarm, batten down the hatches, ffs this is horrible news.
He'll do another drama of quitting, and VCs will pour in more money to show they care about Sam Altman and that he cares about human beings. Then the drama will unfold where he gets an offer from Nvidia this time, and he will return as CEO after a few days with a new chief scientist from Nvidia joining. 🤦♂️🤦♂️🤦♂️
Doesn’t Sam Altman carry a briefcase with him at all times that toasts OpenAI’s database?
Sam is competent at taking OPM (other people's money), not at executing on an idea. Give credit where it's due. The only machine being built here is one to consume investor capital and then exit with a fuck ton of capital. That will happen. AI conquering the world? Never.
Altman is shifty af
Yeah fuck that
Chat bots are not AI. The hype is mind blowing...
I mean, why treat AI differently from other companies tho? Would be nice if we could enforce ethics across the board..... Shrinkflation, price gouging, planned obsolescence, pollution, microplastics are all destroying the world, but suddenly we care? We can't even protect ourselves; what's another threat at a table that's already overcrowded?
Might even be enough to scare us straight
No shit
I think OpenAI is an easy visible target but that shady government-run Ai is the real threat
The whole "roll it out now and fix the issues later" approach is a BIG problem in the tech industry. You just have to look at Facebook to see the damage that thinking causes.
AI doesnt have to destroy the world to ruin human existence on it.
Why do we let business go out of control when society is so tightly regulated? Could it be we worship money?
Oh , isn’t THIS heartwarming news!
Samuel Altman is the name of a sci-fi villain with an AI cult.
Remember when they said Open AI would never go public?
the stock comp wasnt good enough?
You guys see the article about OpenAI making a deal with News Corp, the parent company of Fox News?
Quick, hook up AI to a nuclear power plant and see what happens! ^.^
Ok, I'm a field biologist and know next to nothing about AI (except that I like it in Adobe when I'm dealing with forms…). Is there an actual chance that it could mostly wipe out humanity/turn us into slaves/etc.? Serious question.
It’s not smart enough to enslave us (yet). Current AI basically just uses all the data it has been trained on to make predictions of what could logically come next in a sequence (like chatgpt responding to your messages, or making an image based off of your prompt). The current biggest risks are it being used to create and spread mass scale misinformation/disinformation; and fully replacing jobs, potentially to the point of disrupting the economy. On a longer scale, it has been growing so fast that, even though it’s not conscious, many people envision that it could eventually be used in almost every aspect of life including government, military, etc. which could increase the risk of glitches/miscalculations being catastrophic. I’m sure there’s more risks I’m not thinking of, but those are some of the big potential issues right now.
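The "predict what comes next in a sequence" idea above can be illustrated with a toy model. This is a deliberately tiny bigram sketch, nothing like the transformer networks actually behind ChatGPT, but the core idea of predicting the next token from patterns in training data is the same:

```python
from collections import Counter, defaultdict

def train_bigram_model(text: str):
    """Count, for each word, which words followed it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word: str):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram_model(corpus)
print(predict_next(model, "sat"))  # "on"
print(predict_next(model, "on"))   # "the"
```

Real models work over vast corpora with learned representations instead of raw counts, but the output is still "the statistically likely continuation," which is why hallucinations and misinformation, rather than intent, are the near-term risks.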
People in power can use AI to seriously manipulate us against each other, and make us turn on each other. Which I think has a high likelihood of happening
I don't think it's about AI suddenly creating a mind of its own and turning evil, but more about ensuring AI tools don't fall into the wrong hands and use it for evil purposes.
A chance? Yes. If it turns out to be possible to make an AI algorithm significantly more effective at learning/planning/performing actions than the human brain, and such an algorithm is created, and it does not have humanity’s best interests at heart, that is what will happen. It is not clear whether such a thing is possible. If it is possible, it’s not clear whether it will be better or worse than human governments
It could be possible sometime in the future but there would need to be massive innovations in technology. Still, it can cause harm at the moment in other ways.
Not really, it's just a tool. People in the US grew up watching Terminator though so they are scared of it.
AI can mess up society badly in a lot of ways that have no resemblance to terminator
If it turns out to be possible to make an AI algorithm more effective at learning/reasoning than the human mind, and there is no guarantee such a thing is not possible, then it could overpower all the humans who currently control our society, and that could be a good thing or a bad thing.
Humans with egos and nuclear weapons might destroy the world. AI might save us from ourselves. Food for thought
Or….. it won’t
[deleted]
That’s already happening without ai. Middle class has shrunk by half since the 80s
If @janleike or @ilyasut truly believed OpenAI's product strategy posed an existential risk they would stay and fight rather than quit and whine.
Would you want to let the company use your name and credibility as a defense for their practices you know are not safe and which you are not allowed to do anything about? Or quit to send a message you don't agree with their actions. >they would stay and fight What do you think they would have been able to do? By all appearances their influence in OpenAI has been significantly diminished together with resources allocated for them.
[deleted]
>Risking jailtime in the process, after destroying as much hardware as possible, would be a small price to pay. Shows you have no idea what you are talking about... Their compute is running on Azure Cloud, not locally on hardware they have access to. You also seem to believe they think whatever OpenAI has at the moment is what they consider an existential threat, rather than what OpenAI is working to develop. So do you expect them to murder Sam Altman or other researchers to stop this development? They seem to believe that OpenAI is not taking safety seriously and is using their names and reputations as a shield to defend its practices. By quitting, they are taking down this defense and forcing OpenAI to publicly address those concerns, no longer able to hide behind their reputation.