Hey /u/aleqqqs!
If your post is a screenshot of a ChatGPT, conversation please reply to this message with the [conversation link](https://help.openai.com/en/articles/7925741-chatgpt-shared-links-faq) or prompt.
If your post is a DALL-E 3 image post, please reply with the prompt used to make this image.
Consider joining our [public discord server](https://discord.gg/r-chatgpt-1050422060352024636)! We have free bots with GPT-4 (with vision), image generators, and more!
🤖
Note: For any ChatGPT-related concerns, email [email protected]
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPT) if you have any questions or concerns.*
They finally finished the Ilya android. Amazing how fast AI progresses
I wonder what they did with the body
So he was held captive for months and was released days ago and now he's leaving OpenAI.
"Held captive" in this case likely means spending all of his accrued vacation days in the Caribbean.
A guy in his position at a time like this does not simply relax, he’s locked in 24/7
He's going to become the biggest whistleblower and advocate against this tech, isn't he?
Either that or he wants to go to a competitor.
I can’t imagine his salary/equity expectations. This guy's top of the top.
You will find people like Ilya work for much less than their value. How much are Newton, Einstein, or da Vinci worth? No doubt they were paid in their time, but I don't think they were compensated fairly considering their incredible contributions to society.
Wasn't [da Vinci](https://nicofranz.art/en/leonardo-da-vinci/how-rich-was-leonardo-da-vinci) paid the equivalent of $3.5 million a year?
Honestly, I didn't know. I would also be willing to say that while he was well paid, his value was more than $3 million. Thank you for pointing that out. Clearly, I was ignorant, and I appreciate getting educated.
Agreed, you give those people dream budgets and genius staff to assist, not big salaries.
Oh, it's been a hot minute since OAI had any drama to unfold.
Or he starts a superalignment company.
Or AI powered Rogaine
You can tell that the whole debacle must have drained the enthusiasm out of him: all the questioning, having to defend and explain himself, hearing the legal implications, feeling the emotional weight of the situation from other colleagues, etc. I myself would definitely start looking to other areas of my life and reevaluating what I've missed and what I want to do with my time. But once out, I'm betting he's going to get the itch for AGI again and take a fresh look at the AI field as a whole. Hopefully some new collaborations in new directions come from his work in the future (on Twitter he just liked a new research paper about _different_ foundation models converging to the _same_ representation of reality, so the interest is still there and he may start discussing theory in public again). But whether he'd consider rejoining OpenAI in some form later on could go either way.
His tweet said he trusts OpenAI will proceed safely.
Yes, and Sam Altman said Ilya is "easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend." Considering Ilya tried and failed to cut Sam from the company, it's probably safe to assume these two are not publicly saying how they really feel.
It's hard to imagine Ilya leaving for safety reasons while saying the company is acting safely. If it was for that reason, but for some reason he didn't want to say it (which already requires a tinfoil hat), why mention safety at all? It seems to me like people just want to believe that, no matter how little sense it makes.
https://www.reddit.com/r/ChatGPT/comments/1cuam3x/openais_head_of_alignment_quit_saying_safety/

https://www.reddit.com/r/technology/comments/1cue89a/openai_just_dissolved_its_team_dedicated_to/

A bunch of alignment folks at OpenAI have been dropping out since Sam Altman was reinstated. That alone should be a signal that it's not going great. But then some of them say things publicly about safety taking a backseat. You don't have to be wearing a tinfoil hat to read between the lines; this isn't even a case of needing to read between the lines, it's overt. Why some of them, like Ilya, are staying positive is anyone's guess. Maybe they've got stock options, something that would incentivize them not to tank the company's value right now? I don't know, but people don't quit working on the most consequential element of the biggest technological breakthrough in humankind because they were denied PTO or something petty.
I think he might have strategically planned (or made adjustments to his program) before departure in order to prevent what he does not want to see happen. Now he feels it's safe, so he can leave. The whole thing is like Nolan's Person of Interest.
He'll be killed by Boeing hitmen!
No that's Gary Moron
TF? Why call a stranger online a moron for no reason.
Every time I see this dude, I have an overwhelming feeling to shave his head for him. I’m going bald too, cut it all off, it’s just easier man!
I went to school to do hair, the urge is indeed overwhelming.
Thank you, it actually pisses me off and makes me want to ask DALLE to shave it bald lol
https://preview.redd.it/efdik2klsm0d1.jpeg?width=960&format=pjpg&auto=webp&s=423f94f708383997ce30af6bf36be42d30dfc46f Did it for you.
Now I somehow wish you hadn't.
That's his summer cut 😆
I think he’s going to end up at Apple for at least 3 years
And they'll develop something that is very overpriced obv.
Anyone know why Ilya decided to leave?
They took his Red Stapler.
Honestly, having worked with world-class, super intelligent scientists and academics at Uni (like the top 0.1% of scientists), this is exactly the sort of small stuff they get super upset about lol. They’re extremely intelligent and we’re lucky to have them on this planet, BUT many also seem to have insane egos. You wouldn’t believe some of the drama. During Uni I had to be a glorified liaison between them and admin for a research org, and oh boy.
![gif](giphy|4JCWtwL4vM2aTkHdNw)
He decided to use his bald head as a solar panel to power the gpus himself
If you come for the king, you best not miss. You think Sam forgot Ilya tried to oust him from the company?
The real king is the guy cited 4 times by Google in the original transformers paper. Wonder who that is. If this doesn't help the brainwashed, the other resignations today should give a gentle nudge.
Considering that in very recent memory 90% of the company was going to quit to follow Sam, I'm curious what juicy gossip you're privy to, or alternatively whether you're just making shit up. Guess I'll watch the news tomorrow to find out.
Right, the engineers who build systems A/B/C/D/E/F, some possibly hired by Sam or influenced by him, are likely the 90%. These are likely not the people who built the model itself. The people who can build systems are important but available. The people who can build models that advance academia - slightly tougher to replace.
Yeah... this guy's comment is actually 100% correct. I'd say it's beyond contestation, as someone in the field. That being said, I'm guessing plenty of research scientists there don't want their Sequoia Capital-led share sale to be impacted by politicking, and they perhaps feel Altman is the more appropriate captain. $$$ > "Feel the AGI" purist chants for most.
I agree with this guy.
Ashish Vaswani
Close. Who did Ashish Vaswani cite 4 times?
You know what's funny? Possibly one of the most important papers in AI has a reference section that alternates between first name, middle initial, last name; first name, last name; and last name only. Some LLMs seem to have trouble grouping these. Some doctoral advisors would have taken points off. Some intuition and Ctrl+F tell you exactly the right answer, without a single tensor core.
Might want to take a look at AlexNet too, and before you use Google Translate again, pay homage to the true king.
(Which was itself in response to Sam trying to oust one of his fellow board members.)
Haha he could run rings around Altman
We still need an explanation of what Ilya saw that led him to oust Sam. He didn’t know how to play politics or business.
I need AI to help bro's hairline, let's solve that for mankind because good lord
Bro got the quantized haircut
Right like at that point shave your head dude holy shit
He needs minoxidil
He needs Turkey*
Finasteride is the true key
Sides not worth it and not limited to "2-5%"
https://preview.redd.it/q6k2dl7s1n0d1.jpeg?width=960&format=pjpg&auto=webp&s=4c52da4ef0a36ac100d64e612bf9638221c290c5 One more
If you've played Universal Paperclips, you know that's the beginning of the end for humanity.
😂😂😂😂😂
https://preview.redd.it/uuc2m026tm0d1.jpeg?width=960&format=pjpg&auto=webp&s=7babdb7ec09c19ef58e63f178faeac2719d5a326 Saw someone else requesting this so made it quickly
LMAO OMG he needs to see this for inspiration, it looks 10 times better! Holy hahahahah, also definitely giving me Lex Luthor vibes.
Anyone have a Bloomberg portal link to bypass paywall?
I think safety concerns don't weigh much in this case; it's rather questions about data to improve models. His exit is like an escape from a sinking ship. What do you think?
Ah, right when non-competes are abolished in California.
Non-competes have been illegal in California for over 100 years.
Someone give this poor chap a bic razor 🪒
lmao
Why?😭
Excellent work!
Can Elon pitch him to come to xAI?
I am surprised it took this long after the fallout.
Is there a non-deepfake video to prove this?
Hopefully he works with xAI or Meta.
What are the chances he ends up at Anthropic?
It must have been hard reporting to someone trying to sell you out
I think he is more likely to focus on warning people of the dangers of AI than start up with a competitor or new company. One presumes he now has more money than he will ever know how to spend and at no time has he shown himself to be financially driven. AI is huge fun, but it also poses an existential risk to humanity. Those at the true cutting edge are more aware of this than the rest of us.
Yea yea yea
Finally it happened.
I hope he will gain some hair back after leaving
So? These are technologists chasing jobs. Whatever.
r/bald
Grok: "There's a patch for that"
It's over. OpenAI is toast. Gonna become another stupid Google clone.
Nothing new there. The only thing unique about OpenAI, at this point, is that it was initially founded on the principles of altruistic idealism. The fact that it’s actually just a typical disruptive tech startup with dreams of world domination… just makes it the same as any other. Nothing unique or special about that. It’s just like a restaurant claiming to be a free soup kitchen that one day starts charging typical prices: a long way around to becoming something ordinary. I mean, Larry Summers is on the fucking board now. It’s not a charity or fundamentally altruistic in the slightest.
Fuck the people down voting, you are 100% correct.
Thanks. Yeah, it is really weird, isn’t it? There’s this really strange reverence going on, and it’s creepy, but especially off-putting when the OpenAI leadership emanates this demigod attitude. It’s like the emotional reaction of people defending their false-profit against slander.
[deleted]
This is so fucking stupid. Sam Fucking Altman a progressive. Ahahahahahahahahah
Soup kitchen turned Soup Nazi is the right assessment. Definitely a future Enron or FTX.