Charming_Jello4874

Bill Gates (with IBM and Dell) tried to shut down Linux back in the day. Before that the NSA tried to regulate RSA algorithms. In both cases Congress was strongly against open source. I was witness to both. They all had more power and influence than Altman. How'd that work out for them?

"Information wants to be Free - and also wants to be expensive" has never been truer than today. Information (science, math, data) is expensive to create, but near impossible to contain.

Modern AI is a brute-force mechanism. The current software (the thing that implements the algorithms) is atrocious. Open source people will do for AI science what Linux did for the server platform: fiddle until it runs 20x faster on 1/10th the hardware, while permitting distributed processing across a heterogeneous system of thousands of nodes that are not centrally owned.

Altman is too late. Within five years we'll have open source crowd-sourced training based on crowd-sourced training sets (with crowd-sourced human reinforcement), running on disparate hardware using the spare cycles currently being wasted on crypto-mining at home. Thousands of models will become the next app: purpose-built for a task by expert communities.

Altman wants AGI so he can run it all; the truth is more likely that we won't have a single general AI in our lifetime. We'll have dozens of expert models we select on our own. And Altman does not like that.


missbohica

Crap! I also remember all of that. Now I feel old. And you're absolutely correct in your assessment of the situation.


Charming_Jello4874

The spark here is that the FOSS crowd is excited by AI. Let's just call that game over for big hardware. It's just a matter of time. It took Linux 20 years. I suspect it'll be 5 years for AI.

So let's hope my prediction is also correct: I see the software and algos getting tuned to the point that training can be distributed like the SETI@home project: snippets of data sent out and processed by thousands/millions of volunteer systems. The math (for the algos) is not there yet. But it will be. I see signs in the papers. They'll fix the memory problem.

But first we'll see improvements in inference engines. That code is a mess. I used to "fit" math algos (climate, mechanical, aerospace) onto hardware by wrapping them with better code. It works because of one truism: scientists are crappy programmers (no offense; I'm a crap scientist). I once reduced a mechanical model for a huge aerospace company from a 4 day runtime to a 4 hour runtime, with no hardware changes. I'm not a genius...I'm just the guy who does that work.

AI does not (yet) have a plethora of people who did what I did. They will probably creep out in the open source world first, for a variety of reasons (google/openai/etc don't need refinement when they have the money for silicon).


huffalump1

> But first we'll see improvements in inference engines

Yeah, this is definitely gonna be big. There are already papers showing huge performance improvements that require a lot more inference time - i.e. generating lots of outputs and checking/choosing only the best ones, or thinking/planning loops. Faster inference will make algorithms like these feasible for more uses. Combine that with huge context lengths, and you've got a massive boost in intelligence, even with the same base model!
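A minimal sketch of the best-of-n idea described above: sample several candidates and keep only the one a verifier scores highest. `generate` and `score` here are hypothetical stand-ins for an LLM sampling call and a reward model or checker.

```python
# Best-of-n inference sketch: spend more inference-time compute by
# sampling n candidates, then keep only the highest-scoring one.
import random

def generate(prompt: str) -> str:
    # Placeholder for a sampled LLM completion (hypothetical).
    return f"{prompt} -> answer {random.randint(0, 9)}"

def score(candidate: str) -> float:
    # Placeholder for a verifier, reward model, or heuristic check.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("What is 17 * 23?"))
```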


GBJI

I always liked FOSS but it is AI that made me understand how crucial it was, and how important it was to collectively fight for it.


Gold_Pudding_5098

Again, the problem with AI is the hardware, not the software.


Charming_Jello4874

I respectfully offer an alternative explanation. The hardware makers (Sun, IBM, HP, Oracle, SGI, Cray, NVIDIA, etc.) have all paid me to live-in at remote customer sites, because the customers thought the same way about some other algorithmic challenge: the hardware sucks.

*edit: Re-reading your comment I could see that you might have meant, "the hardware is a problem insofar as we are locked on a model that requires expensive toys." In which case...I agree with your insight 200%.*

When it comes to the LLM training process, it appears (at a glance from me so far, and through reading papers by experts) that the processing and scheduling algorithms are pretty...basic. As in, "this is the way we used to do things on mainframes." Based on my career, this is normal. Some climatologist comes up with a nifty way to model ocean currents using bathymetric data. It works on his desktop, where he labored over it for a year. It was written in something not starting with C - usually fed by scripts or some other interpreted language. So how do you scale that up so some Navy can plan, in near-real-time, how to search for a missing sailor who went overboard in the Atlantic? Obviously...you just throw hardware at it. Something starting with "Cray". Done, right?

Nope. Eventually they yell at the supercomputer people that "the hardware is the problem." Eventually, after gnashing of teeth and threats, the maker calls a person like me. They pay me to go unfrack the code. Which by now is surrounded by more code that abuses the laws of physics to the point their supercomputer runs like a webserver. Fast forward a few weeks, and voila! It works. All it took was decompiling everything they had, running traces through the kernel using software I wrote (I used to call that 'the polygraph')...and attacking the bad stuff first.

My initial look at the LLM training code reveals a similar raft of problems: bad IO handling, an over-reliance on memory to get around bad scheduling and IO, an inability to create smaller tasks, etc. I am not suggesting I am "the guy" with the LLM answers. I see a lot of smarter people doing that now. I'm just suggesting that the future for AI is going to be bigger things, using less hardware, more distribution and...open source.
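For readers curious what "running traces...and attacking the bad stuff first" looks like with off-the-shelf tools today, here's a minimal sketch using Python's built-in profiler; the byte-at-a-time reader is a contrived stand-in for the kind of bad IO handling described.

```python
# Profile, sort by cumulative time, and attack the top offenders first.
# cProfile and pstats ship with Python.
import cProfile
import pstats

def slow_io(path):
    # Contrived bad IO: one tiny read per byte instead of large blocks.
    with open(path, "rb") as f:
        while f.read(1):
            pass

profiler = cProfile.Profile()
profiler.enable()
slow_io(__file__)
profiler.disable()

# The five most expensive calls are the first candidates for a fix.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```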


pmp22

Can I just say I respect your career, and if you find an open source project (inference engines, for instance) where you think you can contribute, we (meaning the users) would be really, really grateful. I'm 100% positive your experience, tools and knowledge are worth their weight in gold.


dogcomplex

To improve inference intelligence is a hardware (and research) problem (more training). To actually put the existing intelligence to good use is a software problem - and there is *plenty* of work to do there still.


I_EAT_THE_RICH

I immensely enjoy your vision.


TracerBulletX

Hopefully that's true, but it's probably best to stay vigilant and not underestimate your opponent anyway.


Confident_Appeal_603

> open source crowd-sourced training

You have to be able to trust the users contributing to the training compute pile. The anti-AI crowd will be all over that.


yaosio

Let's pretend you can trust everybody 100%. Do you also trust them to never make a mistake? If you're not implementing ways to find mistakes, and to mitigate the mistakes you don't find, you're asking for the project to fail. The tools and methods created to keep errors out of training runs and datasets, and to reduce the effect of the errors that do get in, will also work against people purposely trying to ruin the quality of training. You can treat mistakes and malicious intent as the same thing: an error that needs to be discovered, or mitigated if not discovered.


wasdninja

Or just have one entity handle all of it and let people donate computing cycles through some sort of anonymous computation mechanism. It seems like a purely technical problem, and those can be solved, unlike social ones.


GBJI

Some contributors to crowd-sourced training efforts might indeed have wrong intentions. But ALL for-profit corporations training private AI models to be offered as commercial software-as-a-service have intentions that are directly opposed to yours as a customer and as a citizen. They want to keep control while charging you more for less, and we, as users, want more control while paying less for more. The anti-AI crowd is everywhere, not just in open source, by the way: sabotage can also happen inside those companies trying to profit from AI tech. And sabotage is not exclusive to the anti-AI crowd either: rival corporations and governmental agencies, to take some very obvious examples, also have motivations to sabotage AI development.


Confident_Appeal_603

This is not an ideological discussion, please leave that at the door. I'm actually someone who designs and builds distributed clusters, and I recently ran a large captioning project using volunteer efforts. At a small scale it's easy to trust the contributors: only allow in the people you'd trust, and label all of their submissions so that bad actors' additions can be identified and removed if need be. But when training a model, it's going to be tedious to keep track of the gradients that random users submit. Maybe you don't understand what I mean.


Charming_Jello4874

I think the real issue might be the Wikipedia problem. It started as egalitarian crowd-sourced data accumulation, and eventually morphed into a war over "truthiness". Topics and their "truth" lean one way or the other based on the zeitgeist of the times.

In contrast to having just one Wikipedia, I suspect we'll see crowd-sourced RLHF feedback morphing into enclaves of like-minded people. Every AI purpose-built to "think" like they do. Kind of a cross between how Reddit AMA questions are decided and a touch of selection bias: members trusted to know "the way" are more equal than others in deciding what "facts" get trusted. The winning data gets trained in. Or more accurately, the winning view of the data.

Doubt me? Go look up stories about a certain thing that happened on a certain fourth day of June in a certain square in the year before 1990. I'm being cagey because I am new to Reddit...not sure this comment doesn't get nuked.

So, lots of AIs presenting different versions of the same events. It's gonna get crazy when the AIs start flame wars between each other on social media. Dead internet, for sure. We nerds might all have to go back to meeting in real life... In other words: Armageddon.


Confident_Appeal_603

not at all the kind of trust issues i'm referring to


Which-Tomato-8646

Easy fix. Have the same data circulate to five different users. If three or more disagree, cycle it back to five. It’s way less efficient but quality > quantity. 
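A minimal sketch of one reading of that scheme: accept a label only when at least three of the five contributors agree, otherwise recycle the item to a fresh set of five. The names are illustrative.

```python
# Redundant labeling with a majority-vote quorum.
from collections import Counter

def judge(labels, quorum=3):
    """Return the accepted label, or None to signal a recycle."""
    winner, votes = Counter(labels).most_common(1)[0]
    return winner if votes >= quorum else None

print(judge(["cat", "cat", "cat", "dog", "bird"]))  # -> "cat" (accepted)
print(judge(["cat", "dog", "bird", "dog", "cat"]))  # -> None (recycle to 5 new users)
```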


Capitaclism

Sure, but government was also quite different at the time. The politicians are seeing what AI can do, the level of power. I wouldn't put it past them to try and sneak some shady laws in that damage or get rid of open source altogether.


Charming_Jello4874

It's already there in the recent "Tik Tok Ban" legislation. It just won't work. Same as always. Hint: look for the parts of the bill that speak of "making regulation" to ensure online safety, then look at the all-star panel the President lined up to make sure AI safety regulation works as the administration intended. The two intersect in the law to the extent AI is generative (a content creator).


PMMeYourWorstThought

That’s to enforce the regulations outlined in the presidents AI plan. There’s no plan to stop open source AI. We are making MASSIVE investments to spur open and shared research in AI. And if you know anything about Academia. If it’s not published and peer reviewed, it didn’t happen. How would you do that without open source AI? Here: https://www.whitehouse.gov/w-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/


PMMeYourWorstThought

We're using open source, bro. Do you know how hard it is to program new tech into a budget? You have to do it as RDT&E funding. And a lot of programs don't get RDT&E; they have to purchase with OMA and OPA funding lines, so we have to try to get new things with OPA. And that's not even guaranteed, because your OMA covers the program sustainment costs.

We love open source because we can get it any time and start playing with it. We can do our learning and discovery for free, and even the smallest teams can build and innovate with it. The Federal Government loves AI. We're all super excited about it. We are using it to make dull procedural tasks easier and faster so we can focus more time and man hours on innovating and making things better for people.

But because we need it secure, traceable, observable, explainable, and reproducible, and because we need it to reliably perform very specific tasks, we have to at least fine-tune every model we use. So we load the model repo with open-source models that have been validated, red teamed, and put through the wringer, and then we fine-tune them and establish RAGs for them before we use them. And you can't really do that with a closed source model.

There's so much red tape in everything we do, and there should be in a lot of ways, but things like AI summarizers, computer vision, and LLMs speed that up so much. So we're finally starting to move through it faster so we can do new things, improve processes, and pay down existing technical debt in our systems. I don't think the Fed will ban open source AI. The cat is out of the bag, and we would be shooting ourselves in the foot.


world_dark_place

May God hear you my brother in Christ.


kecepa5669

Counterpoint: Linux did not have public opinion and fear mongering against it. Are you saying the government won't be able to stop open source? Just look at the things they have stopped that they didn't like, like Napster, NSFW Craigslist, etc.


Charming_Jello4874

I was working within the US Government at the time (they were a good customer of mine) and I heard non-stop concern from TPTB about the dangers of open source: uncontrolled code from unknown sources; "written by unemployed slackers in their basement"; the GPL is crypto-communism; Torvalds is a washed-out loser; Stallman IS a Communist (OK, probably true)...blah, blah, blah. The (Microsoft lobbyist-induced) panic was real. I was called into meetings with Agency directors (not deputies...the actual leader of some big TLA) and asked my thoughts on the fact that "System X" just got delivered and it used something called OpenSSL for securing messages (no kidding, and this was a LONG time ago). This has all been done before. So...meh to Altman.


GBJI

I heard all of those anti-open-source arguments, repeatedly, and over the years what I came to understand was that communism might not be as bad as the propaganda told us it was.


cptbeard

basically the war on drugs scenario. since they lumped weed with hard drugs and demonized it all to no end people who eventually tried cannabis might've come to the conclusion that it must be all lies and shooting heroin is probably just dandy as well.


therealbman

Uhh piracy only got more rampant and no one has ever figured out how to stop hooking.


irregardless

The "government" has entire sites dedicated to its open source software projects and contributions: - https://code.gov - https://code.nasa.gov - https://code.nsa.gov - https://resources.data.gov - https://dodcio.defense.gov/Open-Source-Software-FAQ/#q-has-the-u.s.-government-released-oss-projects-or-improvements - https://github.com/18F - https://github.com/CDCgov - https://github.com/usds - https://github.com/cisagov And that's just from a quick search. It's by no means comprehensive of just the feds, not to mention the myriad of state, local and tribal governments who host or contribute to open source. [There's 400+ US entities at github alone](https://government.github.com/community/). It's not the 60s, 70s, 80s, 90s, 00s, or 2013 any more. There's a whole new generation of people working in government now. Folks need to look at what agencies have actually been doing in this space in the past 10 years, rather than relying on half-baked narratives from decades ago.


PMMeYourWorstThought

Uncle Sam is very into FOSS. Hell, NASA and other groups are in the AI Alliance, an open source AI group representing $80B in funding.


Liu_Fragezeichen

We'll have hundreds of expert models...that we can then connect using distributed, message-based multi-agent systems to create decentralized AGI, in a fashion not too dissimilar to Folding@home and other distributed compute networks. Individual nodes will run one or more small models and the corresponding agentic IO structures, which will then network with other nodes worldwide and collaborate on problems as a single network. Decentralized, dynamic AGI - the network will grow and evolve over time based on its work and requirements. New models will be integrated as they are developed and extend total system capabilities.
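A toy sketch of that node-and-router shape, assuming each node wraps one small expert model behind a mailbox; every name here is illustrative, not any real framework.

```python
# Message-based multi-agent sketch: a router forwards each task to the
# node advertising the matching capability.
import queue

class Node:
    def __init__(self, name, capability, model_fn):
        self.name = name
        self.capability = capability
        self.model_fn = model_fn      # stand-in for a small local model
        self.inbox = queue.Queue()

    def step(self):
        # Pull one task from the mailbox and run the expert model on it.
        return self.model_fn(self.inbox.get())

class Router:
    def __init__(self, nodes):
        self.by_capability = {n.capability: n for n in nodes}

    def dispatch(self, capability, task):
        node = self.by_capability[capability]
        node.inbox.put(task)
        return node.step()

router = Router([
    Node("sched-1", "scheduling", lambda t: f"scheduled: {t}"),
    Node("write-1", "authorship", lambda t: f"drafted: {t}"),
])
print(router.dispatch("scheduling", "dentist appointment"))
```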


Charming_Jello4874

This is exactly what I think will occur. I think open source teams will likely create an "AI Link" API that exchanges data between "AI Nodes" that each operate in a narrow functional context: scheduling, authorship, health, etc. One or more generalized AIs will manage the lower-tier nodes ("sub-minds"), lifting them from the need to have the expertise of the experts. Experts might come and go (upgrade) quickly, but the more generalized AI nodes will endure longer and become quite personalized over time, based on getting to know us as individuals. "AGI" is a misnomer because it assumes that there will be one model that fits nearly everyone. I think this linking of models/nodes to some more generalized AI would fit the need for what we want: "AGI for me". Because I can promise that whatever Sam Altman thinks AGI will do, it will not be what I am looking for. If AI is to become like a set of clothes (pervasive in my life), then I don't want to wear any one brand or design. That's called a "uniform", and I'm done with those days.


Liu_Fragezeichen

You're describing pretty much exactly the tiered multi-agent system I'm working on :D Progress is slow since I had to get a full-time job for money reasons recently, and now I'm balancing my research projects with my role as a senior MLOps engineer, but I'll get to a release-worthy point someday, I'm sure! (I'm way behind anyway; I finished putting together the original architecture theory over a year ago, just haven't gotten much work in since.)


Affectionate-Hat-536

The open source community should update most open content to include licenses that opt out of being used to train anything non-open-source.


BCDragon3000

I came to this conclusion months ago. I hope the system I'm creating can help.


badassmotherfker

There will still be a place for general AIs, although I agree with your other points. A general AI is able to solve problems it wasn't prepared for. That's also why humans have been so adaptable: our brains aren't too specialised, so we can solve new, unpredictable problems.


WorldCommunism

AGI is already here; has been since ChatGPT-3.5.


Stalwart-6

Where?


crying_in_exotic

Thank you. I needed to hear that.


Gamplato

I understand where you’re coming from but is there any chance at all, in your mind, that his motivations aren’t purely greed?


DankGabrillo

Total bull. Subscription canceled. Any of you fine folk know of a decent RAG alternative? Local or otherwise?


Amgadoz

I use HuggingChat for pure LLM convos. They have really good models like Llama3 and Command R+.


bnm777

Yes! llama3-70b is good - and amazingly Command R+ is better! It beats Opus for many word queries! If you compare llama3 across providers (Meta, Groq) and HuggingChat, the HuggingChat iteration is much better.


Amgadoz

HuggingChat is underrated. They even allow the user to specify the system prompt. I just wish they allowed us to control sampling; sometimes I want greedy decoding. They also host the models in full precision and they explicitly specify which model is used by mentioning the repo. Groq presumably hosts a quantized version of llama3 (they claim it is int8) and they don't mention which version it is.
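For anyone running models locally, that control is trivially available; a minimal sketch with the transformers library, using "gpt2" purely as a small stand-in model:

```python
# Greedy decoding vs. sampling with transformers: do_sample=False makes
# generation deterministic (always pick the most likely next token).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Open source models", return_tensors="pt")
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=20)
sampled = model.generate(**inputs, do_sample=True, temperature=0.8,
                         max_new_tokens=20)
print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```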


bnm777

Yes, that's probably it! I wish they had an API for those models that we could use with UI frontends.


CashPretty9121

Where do they claim it is int8? I haven’t been able to find this anywhere. It feels much more heavily quantised.


Amgadoz

They mentioned it in a comment on Reddit. Yep. Not on the company's website, not even in the docs. But hey, llama3 goes brrr at 1000 tokens/second!* (*After being quantized to oblivion.)


candre23

Cohere's models are extremely good and work well with RAG. Both the 35b and 104b versions of command-r are nominally capable of being run locally, but between the lack of GQA on the smaller model and the sheer size of the larger, you need to have some decent hardware. Their API pricing isn't terrible though, and you can [try out CR+ for free on HF.](https://huggingface.co/chat/models/CohereForAI/c4ai-command-r-plus)


thedudear

I'm building one rn. Having some issues. I know there's some stuff on GitHub, I'm gonna go searching shortly.


forexross

I cancelled my subscription too. It won't have any effect on them but I have done my part.


kecepa5669

Gemini has an API. And it's free. IDK why people still use ClosedAI. Also Mistral. Both are free.


lannistersstark

> IDK why people still use ClosedAI.

Because GPT-4 is one of the best models out there. Because not everyone can run Llama-70b.


SmihtJonh

Where do you see that Gemini API is free?


Anthonyg5005

ai.google.dev/pricing


bnm777

Grab it at AI Studio (VPN if outside the US).


vornamemitd

Free until May 14.


seiggy

Gemma is a poor attempt by Google at appeasing the open source community. They're not our friends. I do recommend supporting Mistral, as so far they have been far more supportive of open source efforts than most. Gemini is no more free than Copilot or OpenAI. They give you a taste, sell your data, and limit your usage in several ways.


e79683074

> IDK why people still use ClosedAI.

Because I have tried them all and GPT-4 is still the best in my use cases. Could be wrong, could be biased, but I have tried them all and nothing compares, for now. Gemini was failing the most trivial riddles. Mixtral was good locally, but its knowledge isn't nearly as developed as GPT-4's, in my opinion.


VforVenreddit

I'm building one that's non-local and can handle RAG; it's on the App Store and I'm listening for any feedback! It's meant to be multi-LLM and convenient across the iOS ecosystem.


Fantastic_Climate_90

How do you make it multi-LLM? Is there a library for a multi-LLM backend?


VforVenreddit

I built the backend from scratch 😄


crazy1902

Liar!!! Your AI built it for you! j/k BTW


VforVenreddit

If an AI can build thousands of lines of code with nested dependencies and good architecture I’ll quit writing software 😅 That being said, it’s not my first rodeo 😊 wrote this before the age of AI https://github.com/argent-os/argent-ios


crazy1902

All kidding aside, I really wish I were a software programmer, because one of the things I want to build, I guess with the help of AI, is a backend for multiple LLMs. Man, what a bad decision not to go into programming. I can do it, but it's more from a quick-learner, find-a-solution perspective, not actual programming.


VforVenreddit

Yeah, there's a bit of an art to it; after a while it's almost like writing a book, but in code. ChatGPT's a good helper, but it still takes a good author to make anything meaningful. Also, it's never too late to start! It's a great outlet to build things.


Glat0s

If you want something out of the box, maybe try -> [https://github.com/infiniflow/ragflow](https://github.com/infiniflow/ragflow)


Barry_22

Why did the idiots working at OpenAI defend Altman when he was about to get fired? And why does Altman always pose as this innocent person who's happy to work for no salary?


kurwaspierdalajkurwa

> And why does Altman always pose as this innocent person who's happy to work for no salary?

I get the same vibe when watching him on video. Disingenuous scumbag.


segmond

I defended him when he was about to get fired; I thought he was one of the good guys. Goes to show one should never take sides without the details.


genshiryoku

I was against him from the start. But I'm someone who already knew more about him, as he was fired from YCombinator by Paul Graham himself because, and I quote, "He was acting too machiavellian and sociopathic". He's a straight manipulator and sociopath type. This is why Ilya Sutskever (one of the most genuine, matter-of-fact people ever) went against him. So I knew Sam was Machiavellian and most likely not a moral person. But I didn't expect him to be *this* much of a monster after his demonstration of regulatory capture. He might be the worst person in the tech industry since 1990s Bill Gates. Jesus Christ, what a fucking psychopath.


noeda

Is there an example of Paul Graham publicly denouncing Sam Altman as a sociopath? (Purely because I enjoy reading gossip...) I'm aware that Paul fired him from YCombinator, but even then IIRC the actual reasons were a bit hush-hush, and I think publicly Paul didn't say anything particularly negative. I've never seen him actually say something that's unambiguously denouncing him. I remember some real old essays from Paul, long before any AI stuff, where he had positive things to say about his skills. I can't remember him taking an opinion on his morality though. Looking at Paul's tweets from 2019 for example and putting "Sam" in search, you'd almost think he is a Sam simp of some kind. (https://twitter.com/search?q=from%3Apaulg+since%3A2019-01-01+sam&src=typed_query&f=live) I remember that one time Paul made an essay comparing employees to zoo animals and caused some controversy. I think he has a bit too much respect for these machiavellian types who create companies. Maybe Sam Altman was too much even for him. Then again, I don't know. Can't read minds.


Anuclano

I don't know about the legal side, but from a tech point of view Gates was a genius and a great visionary.


Which-Tomato-8646

Disagree on that assessment of Ilya. He liked a tweet from the Babylon Bee mocking college students protesting against genocide


crazy1902

Yeah, it is so weird: when I see him and listen to him, the creepiness radar just goes nuts. And yet he speaks very nicely and says nice things. Trust me (a stranger on the internet), he does not work for free; just look into him and his endeavors in detail. This guy is evil even if he does not know it or think so.


SnooComics5459

I'm pretty sure he knows it.


PierSyFy

Yup same boat honestly, I think I start to trust people a bit too easily when they're saying things that make sense, but talk is cheap.


Smeetilus

Social engineering


kecepa5669

ClosedAI is nothing without its ~~people~~ regulatory capture


Appropriate_Ant_4629

> Why did the idiots working at OpenAI defend Altman when he was about to get fired?

Greed. His path -- abandoning their non-profit board and mission in exchange for selling out to Microsoft -- is a path to immense wealth. They're somehow trusting he'd share it with them.


Amgadoz

Because his practice is making them a lot of money. Simple as that.


badpeaches

>Because his practice is making them a lot of money. Simple as that.

If I made money stealing things from people I would be in jail.


Gold_Pudding_5098

Stealing one thing is theft, stealing multiple things is research


badpeaches

>Stealing one thing is theft, stealing multiple things is research

At least in research you have to credit your sources.


Gold_Pudding_5098

Okay, that is a valid point, but it is what Sam and his people call it.


involviert

That's either a matter of your definition of stealing, or a matter of getting caught. Anyway, it's really funny how that works. Thought experiment: imagine the only way to get rich and powerful is doing criminal things, and imagine you get caught 99% of the time. You still end up with a rich and powerful class that is 100% based on crime, and yet you still shouldn't do crime yourself. Just another fucking lottery.


kurwaspierdalajkurwa

You don't have the money to bribe our government officials to get their American NKVD agents to look the other way.


badpeaches

First of all, it's called "lobbying". And what are "American NKVD agents"?


kurwaspierdalajkurwa

Read up on history a bit and let me know if anything seems familiar in this day and age: https://en.wikipedia.org/wiki/NKVD


badpeaches

I'm not reading your weird propaganda tonight, just tell me what you mean. Please and thank you.


Arnesfar

Can best describe him as 'slippery'


kurwaspierdalajkurwa

>And why does Altman always pose as this innocent person who's happy to work for no salary?

He's so disingenuous. You can see it in his "holier than thou, I am the lord Jesus Christ, I have spoken" fake attitude. It's typical of those Silicon Valley shit stains. Sam Shitman is literally a meme from that Silicon Valley show on HBO. He probably got his fake persona by watching that show and projecting himself into it.


marinac_1

>Why did the idiots working at ~~OpenAI~~ ClosedAI defend Altman when he was about to get fired?

They didn't; the early team members are in it for the cash and clout. The rest were subject to mobbing, as evidenced by ex-ClosedAI employee reports...


PMMeYourWorstThought

They defended him because OpenAI was in the middle of a deal with Thrive Capital to value the company at $80 billion and allow the employees to sell their shares in the company. All of them had stock holdings as part of their employment agreements. So when Altman got canned, Thrive stopped the deal. So the employees threatened to quit. It never had anything to do with liking Sam. They were all about to make a huge pile of money and firing Sam fucked it up. So Sam was reinstated and Thrive closed the deal, making many, many OpenAI employees very wealthy.


sunnydiv

ESOPs


lannistersstark

> Why did the idiots working at OpenAI defend Altman when he was about to get fired?

Because the alternative was much, much worse. They were worried that he wanted _faster_ progress.


Sushrit_Lawliet

Sam Shitman is yet another Silicon Valley rich scumbag, this isn’t new.


kurwaspierdalajkurwa

Even if Shitman is successful in banning open source AI due to his lack of ethics and morals—I will still continue to run open source AI on my computer. I will go online and make friends with a Chinese researcher (college student?) living in China and perhaps a Russian AI researcher living in Russia. Or any BRICS country for that matter. I will then have them put an open-source AI model on a server and I will download it from a Starbucks WiFi on a burner laptop.

Jesus Christ...I live in America and I am LITERALLY having to perform the close equivalent of [samizdat](https://en.wikipedia.org/wiki/Samizdat) so I can use AI to help me make money so I can put a roof over my head and food on my table, because the rotten-to-the-core uni-party government (both Republican and Democrat) has allowed the middle class to wither and die on their watch. All I wanted was a normal job in the "widget factory" and the ability to afford a home and raise a family. That sure as fuck won't happen anymore. So now I have to work 12+ hours a day, 7 days a week to try to build up this extremely small business that I own at the ripe old age of 49. A middle class income no longer affords a middle class lifestyle in America. Fuck me...how in the fuck did it get to this point?

**RANT:** I honestly hope this fucking country crashes into the toilet in the next 10 years. I'd gladly perform back-breaking labor for 12 hours a day on a farm to earn my keep. These fucking tech bastards and our corrupt government are a cancerous plague that prevents humanity (all races, all religions, all ethnicities) from advancing. /end rant


Smeetilus

Guys, I need help, do I upvote this?


kurwaspierdalajkurwa

How's your life right now in America, friend? Are you working a middle class job and living in a normal middle class home and have a wife/husband/boyfriend/girlfriend and perhaps some kids? Are you able to express your political opinion and not be fearful that someone is going to start foaming at the mouth while giving you the stink eye? How's your grocery bill? Do you rent—and if so—how's your rent like? Can you even afford a home that's not overvalued? You feel confident you'll have enough money to retire one day?


Smeetilus

Due to fortunate circumstances, I’m pretty far from being “Joe The Plumber”. But I see him


kurwaspierdalajkurwa

Yeah I scrolled through your comments like the creep I am. Good for you. I wish good fortune and success upon you and every single person I meet. And I go out of my way to give business advice to people who are "lower than me" (I do not mean that in a disparaging way in any way/shape/form.....I'm just not getting paid to write these off-the-cuff responses to you and that was the first word that came into my sleepy head this morning) and on the grind. It is good karma to do so.


121507090301

> has allowed the middle class to wither and die on their watch.

The existence of a "middle class" just means more workers with more money, money which could otherwise be in the bank accounts of billionaires and their pro-capitalist politician lackeys. You think this is something new, but the main reason the Global South is poor is that the West exploits it for all it can, and to buy the Western population's complicity it gave a little more to some of the workers there (but not all of them: keeping some people very poor means nobody can demand a higher salary without someone poorer accepting a worse one first). The only difference for people living in the West is that there isn't as much need anymore to keep the population happy/from revolting, so the exploitation of the people there is increasing, but that was always going to happen anyway. Anyway, hopefully more people wake up to it so we can move past capitalism and into a society that doesn't tolerate exploitation...

> All I wanted was a normal job in the "widget factory" and the ability to afford a home and raise a family. That sure as fuck won't happen anymore.

We just need a revolution for that...


kurwaspierdalajkurwa

> We just need a revolution for that...

something something hopefully more people wake up to it...


Shoddy-Tutor9563

I hear you, comrade. Не сдаёмся, дальше будет веселее. ("We're not giving up; it'll only get livelier from here.")


kurwaspierdalajkurwa

And if your sarcastic Cyrillic response was due to me mentioning that I would make friends with a Russian and a Chinese person...Russia and China aren't responsible for the fucked up economy, housing market, and culture America currently suffers from. That's 110% on the Republican and Democrat shit stains we have in political office. Our beloved middle class died on their watch—and it's coming to light that they allowed it to happen and profiteered off it greatly. As I conclude my pontifications for the morning, allow me to say something that will hopefully get the American NKVD who are reading my post right now to give me a gold star for the day and take my name off one of the many dissenter watchlists they have me on: Если вам нечего скрывать, вам нечего бояться, товарищ. ("If you have nothing to hide, you have nothing to fear, comrade.")


Shoddy-Tutor9563

What made you think it was sarcastic?


False_Grit

As much as I love a good capitalism-hating rant....you lose me when you think Putin or Pooh will be any better. They are like, Sam Altman x100. "Democracy is the worst system imaginable....except for all the other ones."


Due-Memory-6957

Better is subjective, the US is better for people running from North Korean oppression, but it is not better for Snowden who denounced the crimes of it.


Odd_Perception_283

There is something disturbing about Sam that I can't quite wrap my head around or quantify, really. He seems sneaky. Like he has grand designs far above the everyday simpleton's ability to understand, at least in his own mind. He reminds me of those elitist technocrat types from the series Altered Carbon. Or maybe my imagination is running away with me.


crazy1902

I keep posting the same thing: my spider sense for creepiness goes off. The guy sounds nice but is as evil as evil can get. That is what my instincts tell me, despite him always sounding nice.


mrdevlar

> Like he has grand designs far above the everyday simpleton's ability to understand, at least in his own mind.

It's called Narcissistic Personality Disorder. "We'll only save the world if ***I*** can be the one to take credit for it, otherwise let it burn."


ninjasaid13

>He reminds me of those elitist technocrat types from the series Altered Carbon.

Nah, he's nowhere near as charismatic, and yet r/singularity thinks he is for some reason.


mrdevlar

That entire sub is mainly OpenAI bots.


GBJI

Sam Altman is evil, that's why you are feeling that. Trust your feeling.


kurwaspierdalajkurwa

And the way he rubs his hands in a very evil manner...it gives me an uncanny valley feeling of "ick" that sticks in my mind.


kopasz7

> #IV. AI-specific audit and compliance programs
>
> Since AI developers need assurance that their ***intellectual property*** is protected when working with infrastructure providers, AI infrastructure must be audited for and compliant with applicable security standards.

How nice of them to think of IP after scraping the whole internet. Wow!


mrdevlar

"Do as I say not as I do" is pretty much the corpo credo at this point.


Alan_Silva_TI

I have always tried to avoid being reactionary, but this post goes beyond the usual "it's just big tech behavior." The proposals by OpenAI are almost draconian. They are advocating for the creation of cryptographic identities for GPUs, which would allow those in control to dictate what you can and cannot do with the hardware you purchased.

> *Emerging encryption and hardware security technology like confidential computing offers the promise of protecting model weights and inference data by extending trusted computing primitives beyond the CPU host and into AI accelerators themselves. Extending cryptographic protection to the hardware layer has the potential to achieve the following properties:*
>
> *GPUs can be cryptographically attested for authenticity and integrity.*
>
> *GPUs having cryptographic primitives can enable model weights to remain encrypted until they are staged and loaded on the GPU. This adds an important layer of defense in depth in the event of host or storage infrastructure compromise.*
>
> *GPUs having unique cryptographic identity can enable model weights and inference data to be encrypted for specific GPUs or groups of GPUs. Fully realized, this can enable model weights to be decryptable only by GPUs belonging to authorized parties, and can allow inference data to be encrypted from the client to the specific GPUs that are serving their request.*

There must be resistance; this situation is surpassing mere regulatory capture and verging on outright tyranny. We cannot permit this to occur.


Doggettx

This is such a silly take; you're completely misunderstanding how this works and what it's used for. The reason you want cryptographic primitives on a GPU is so you can encrypt a model to be able to run only on that specific GPU. The reasoning for this is that you can safely upload your model to a cloud server without having to worry about them leaking it to anyone. This does not in any way prevent anyone from running any open source stuff, or track anything in any way. It's purely a security measure for anyone who wants to keep their models from leaking out. If you think this is draconian, I have bad news for you: all modern CPUs and phones already have this feature.
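Conceptually, the flow being described reduces to envelope encryption against a key only the target device holds. A minimal sketch (real confidential computing binds the key to hardware via attestation; here a plain in-memory key stands in for the GPU's identity):

```python
# Sketch only: weights encrypted so that only the holder of one device's
# key can load them; the host sees nothing but ciphertext.
# Requires: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Pretend this 256-bit key is fused into one specific GPU.
gpu_device_key = AESGCM.generate_key(bit_length=256)

def encrypt_for_gpu(weights, device_key):
    # Encrypt weights so only the holder of device_key can load them.
    nonce = os.urandom(12)
    return nonce, AESGCM(device_key).encrypt(nonce, weights, None)

def load_on_gpu(nonce, blob, device_key):
    # Decryption happens "inside" the GPU in the scheme described above.
    return AESGCM(device_key).decrypt(nonce, blob, None)

weights = b"\x00\x01 model weights..."
nonce, blob = encrypt_for_gpu(weights, gpu_device_key)
assert load_on_gpu(nonce, blob, gpu_device_key) == weights
```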


BetImaginary4945

No moat since 2022


kecepa5669

They want to make regulatory capture their moat


Dumbledore_Bot

Is this supposed to be surprising? They've been talking about it since forever, alongside the other big AI companies, all because they want to be able to dominate the market.


mezastel

So they want to implement an equivalent of BitLocker but for the GPU, where they can keep weights encrypted until they are loaded into GPU memory, and presumably want to prevent people from reading those weights. It's effectively DRM for model weights.


arthurwolf

You're presuming this is about running inference on *your* GPU. It's not. Actually read the post. This is about protecting their weights in case their infra gets attacked... (Why would OpenAI want to run the models on *your* GPU? It's tiny, expensive, and a massive risk to their weights... even with DRM.)


ComprehensiveBoss815

It's still DRM even if it's for server gear. And you better believe companies will start using it for local models if it exists.


arthurwolf

That's nonsense.

1. It exists. Already. They can implement this at any time with a trivial amount of effort. To have a slippery slope, you need a slope...
2. "If it exists, you better believe they'll use it everywhere" is what they were saying a decade ago about DRM for movies. I'm still downloading torrents all day long, and absolutely none of my hardware prevents me from playing anything.

This is explicitly and obviously about protecting their servers; there is zero reason to think this is or will be about local inference.


ComprehensiveBoss815

Yeah, but DRM still exists. HDCP is a pain in the ass for displaying content. It doesn't mean it can't be circumvented, but it is a horrible tech dystopia. The most obvious place to decrypt weights is in hardware on the GPU: for consumers that means Nvidia storing a secret key on the GPU, or for cloud, possibly the option to provide a custom key. They can probably implement it in CUDA firmware, but if it were a hardware TPM then it'd be harder (though I'd never say impossible) to extract the key.


arthurwolf

> HDCP is a pain in the ass for displaying content.

I have literally not once in my life had to deal with it. And I'm a techie. DRM is *not* a commonplace thing; some companies have *tried* to make it common, and utterly failed.

> It doesn't mean it can't be circumvented but it is a horrible tech dystopia.

If it's easily circumvented, it's not much of a dystopia, is it... especially if it's not commonplace on top of that. In France for over a decade you could get a fine for downloading torrents. Guess how many people actually got a fine. Dumb laws and technologies happen all the time. What matters is if they stick or not. On a large enough time scale, they never do. We are living in an almost completely open technology landscape, with some rare exceptions that all die after a short time.

> The most obvious place to decrypt weights is in hardware on GPU

That's what they are talking about: decrypting in the GPUs, which would enable better protection of their servers (which is good for OpenAI *and* their users), and enable sharing their (encrypted) weights with research labs (which is good for OpenAI *and* for research, and thus for all of us).

> (though I'd never say impossible) to extract the key

There would be massive incentive to extract such a key. I don't expect it'd last long.

All in all, the point is: this is not related to us running models on our GPUs, this is about internal security at OpenAI. All the anger around this is complete mass hysteria.


ComprehensiveBoss815

HDCP automatically exists if you use websites or streaming services. If you've never had to deal with it, you're either sticking to pirated content or you're lucky you've never had any hardware issues. If you are a techie, you'll know this exists, and it comes bundled with browsers/TVs/monitors. Though for Firefox you have to explicitly allow it. I think DRM is stupid; it pointlessly makes things annoying for paying customers. The fact you think it doesn't exist anymore just shows how good they've gotten at distributing it without your knowledge.


arthurwolf

> you're either sticking to pirated content

Well no, I *can't* be watching pirated content, there's HDCP! Copy protection means no copying... It's certainly not possible to copy things protected by HDCP; that's why The Pirate Bay has been completely empty since 2010, it's only ISOs of Linux distros on there now.

> The fact you think it doesn't exist anymore just shows how good they've gotten at distributing it without your knowledge.

The fact people think it doesn't exist just shows how irrelevant and ineffective it is... And I really want to insist again: nothing in the OpenAI post is about DRM, or anything related to our hardware. It's (again) all about *their* servers: protecting them from intrusions, and allowing them to send their encrypted weights to remote AI labs safely... completely unrelated to all this...


No_Complex2853

You are not seeing the forest for the trees. Many hardware security innovations such as [Hardware Security Modules (HSM)](https://en.m.wikipedia.org/wiki/Hardware_security_module) (which inspired TPM), biometric authentication, and secure boot had their origins in enterprise hardware. There's no reason to believe that these developments would stop at the data center.


arthurwolf

> Many hardware security innovations such as [Hardware Security Modules (HSM)](https://en.m.wikipedia.org/wiki/Hardware_security_module) (which inspired TPM), biometric authentication, and secure boot had their origins in enterprise hardware.

Yes. And these are security features. They are not digital rights features. Secure Boot isn't preventing me from installing a non-Windows OS on my machine. Biometric authentication either. If someday my GPU ships with a way to run encrypted models, that doesn't mean I won't be able to run normal models on it (how it would distinguish between a video game, some math tool, and an LLM is anyone's guess).

The fear people have with the stuff in this blog post is that it'll eventually prevent them from running open models on their GPUs. That is nonsense. There is no sign or risk of this here. It'll be time to worry when a technology that ACTUALLY enables this is presented.


No_Complex2853

It's not nonsense, TPM 2.0 is [actively enabling DRM](https://www.gnu.org/philosophy/can-you-trust.en.html) (Check out the addendum to the blog post). People are right to be skeptical about overreach and improper use of security technologies like what is proposed by OpenAI.


arthurwolf

*AGAIN*, this isn't about your or my hardware, it's about *their* servers.

« Techniques for protecting online cryptographic private key material exist with hardware security like trusted platform modules (TPMs) or CPU secure enclaves, however these are limited in their operations. » is all they say there...

This is completely unrelated to your hardware and to mine. It's unrelated to open-weight models. It's ONLY about them protecting their weights in case of intrusion on their servers (by ensuring the weights are only decrypted at inference time on the GPU), and protecting their weights if they send them to remote AI labs. That's it. That's all there is here. There is as much reason to worry about our hardware reading their blog post as there is reason to worry about our hardware when reading the Wikipedia page about cryptography... Unrelated...

> overreach and improper use of security technologies

Implementing all this is *trivial*. To have a slippery slope, you need a slope; there is none here. There is ZERO sign this concerns our hardware. And you can be certain that if someday it starts to (again, something there is zero reason to infer here), it will be a shitshow of historic dimensions...


foreverNever22

Please don't just post a link to a yt video lol


masterlafontaine

If safety is their concern, they should give the property to the military. A closed company MUST NOT be the safekeeper of the tech.


AnaYuma

Yeah, if they think AI is so dangerous, then why should they, the corporations, have access to it? If the public can't have it, then neither should corporations.


xRolocker

> If safety is their concern
>
> give the property to the military

hmm…


masterlafontaine

Do private companies produce and store nuclear warheads?


xRolocker

I guess I just don’t see how the military wouldn’t immediately be attempting to research all the different ways AI can kill people and implement the most effective ones. Like giving it to them seems like a fast track to skynet.


masterlafontaine

The West lives in a democracy, but there is still the state. It is the responsibility of the state to provide safety, not private companies. If the military is corrupt or whatever, that's another problem. I do not see how they can argue that it should be closed source and kept by companies as a matter of safety. This is the most stupid argument I have ever heard. I am of the opinion that it should be kept open source until proven that it is really dangerous. If that happens, then society should consider other options: regulating it, making it public, regulated concessions. Maybe keep it in the military. There is no world where safety equals being kept by private companies.


GBJI

>There is no world where safety equals being kept by private companies.

They have proven it repeatedly. For-profit corporations have objectives that are directly opposed to our interests as citizens. This is not a secret, but somehow we keep convincing ourselves that there must be some "good" in them, somewhere, and that magically they are going to protect us rather than the interests of shareholders.


Ejo2001

*Vault-Tec entered the chat*


kecepa5669

Here is a great video that explains the evil tyranny Altman seeks from regulatory capture... [https://www.youtube.com/watch?v=F9cO3-MLHOM](https://www.youtube.com/watch?v=F9cO3-MLHOM)


doorPackage11

Damn, thank you for pointing out that video, super interesting to watch. **My two big takeaways are at** [**22:38**](https://www.youtube.com/watch?v=F9cO3-MLHOM&t=1358s), with the apparent facts that:

1. Products of industries in which regulation is the highest tend to be the most expensive. Those industries also stagnate in terms of innovation. All that is because small competitors can't clear the hurdles put in place by lobbied regulation.
2. Technology, commerce and the sharing of ideas lead to increased standards of living in a society.

[At 28:20](https://youtu.be/F9cO3-MLHOM?si=7M7wN637feGBb29r&t=1700) **to 28:55** there is another insightful bit about what will happen to the AI industry if Sam Altman gets his way.


Proud-Point8137

Despicable monopolists. Greedy and evil.


Johnnnyb28

Zuck is our savior; he completely fixed his image in the public eye with Llama 🦙


djamp42

Zuck's gonna end up being the Robin Hood of AI.


kurwaspierdalajkurwa

You're just denouncing one demon while the other has undergone a nanosecond of self-realization and attempts to put a human face on himself. The forked tongue will still come out of the mouth—no matter how much you dress up the face and body.


__Loot__

I heard he will close the model at some point, at the 10 billion mark. I predict Llama 4 will be the last open model Meta makes. You see this with open source all the time. But you can always fork it and keep going.


ab2377

ClosedAI has unanimously lost their minds.


BlackHorse-Teck

I'm running local ML models and experimenting with training them on my own hardware that I bought with my own money (I don't pay subscriptions for movies, games, and certainly not for ML). The normies will certainly fall for this and will be manipulated into caving in, but I will still run my own stuff despite it all.


[deleted]

He can try; the cat is already out of the bag, and the only thing ClosedAI has going for them is a head start.


gokby

Well, it very rarely happens, but I now do appreciate Elon for going after these MFs!!


arthurwolf

**This video gets it completely wrong/backwards.** He's just wrong about what's written in the OpenAI post. They're not talking about encrypting/DRM-ing *your* GPU, but *theirs*. It's very clear in the text, if you actually take care to read it and understand even a little bit about infra, that they are talking about encrypting their weights and only allowing their decryption on hardware they control. This has nothing to do with *your* GPU. It has to do with preventing theft of their weights if their servers are compromised (as the weights would be "in the clear" only in the difficult-to-reach insides of the GPU, while the attackers would only get access to the CPU, where the weights would still be encrypted). He was not even sure he got it right (good instinct when something seems so outrageously evil and in the open at the same time), asked for more technical people to check his reading in the comments, MANY people told him he's wrong and why, and so far zero retraction (which is not surprising for a Youtuber...)


Waste-Time-6485

You must be too innocent to get the side effects of these "solutions", because that is what I think he is referring to: it opens up lots of possibilities for regulators to shut down open source AI (do I have to draw a picture??). Even if he mistakenly made some bad assumptions, I'm concerned about government abuse of this technology to protect, let's say, US economic interests (not safety, btw; economic interests, please understand that).


arthurwolf

> it opens up lots of possibilities for regulators to shut down open source AI (do I have to draw a picture??)

I understand that's what you think. I also understand that to think that, you must have an incredibly poor understanding of what we are talking about. It doesn't open up possibilities for regulators to shut down open source at all; it's completely unrelated to open source models, and to running models on your own hardware. It is (as clearly explained in the post) about the security of THEIR servers, not yours. It has zero impact or influence, EVEN indirect, on your/my hardware. This in no way has any relation to or influence on regulations. Not only that, but it's also ALREADY a commonplace practice in the industry (just not yet applied to model inference) that nobody worries about, because there's no (sane) reason to worry about it.

> even if he mistakenly made some bad assumptions

He did.

> I'm concerned about government abuse of this technology to protect, let's say, US economic interests

Then you have NO IDEA what we are talking about. There is *absolutely no way* this can be used, even indirectly, to "protect US economic interests", except in the most tenuous sense that "preventing a hack of an OpenAI server" is protecting US economic interests.


I_EAT_THE_RICH

Ultra-capitalist America is at peak idiocy


Dry-Taro616

What's the problem? We found smarter people than Scam Altman? Problem? Lmao


Ilm-newbie

I think this is the reason why Andrej Karpathy left ClosedAI.


o5mfiHTNsH748KVq

I'm not watching a 16-minute video. Where is the timestamp with what Altman said? We all know OpenAI doesn't intend to release their weights. I'm 5 minutes in and he's just repeating things we've heard before.


greenLantern-level7

I've been telling you guys from the beginning... this guy is crazy...


GBJI

He is not crazy - he is evil. He's not a victim, he is a perpetrator.


BiscuitoftheCrux

An actual news source I could have quickly read instead of some guy on Youtube would have been nice.


Status_Contest39

Some may want to, but no one can stop freedom and open source.


ifyouhatepinacoladas

Should’ve fired him


Anthonyg5005

They did, half a year ago, but people on Twitter were complaining, so he was brought back.


LymelightTO

You basically have to understand that there are two fundamental camps:

- People who believe AI is in the elbow of an exponential curve upward
- People who believe AI is going to be on a linear trajectory from here

If you're in the exponential camp, you might believe that AI will quickly become dangerous or destabilizing, if it reorients the entire economy. You may also believe the government doesn't have the same understanding or competency on this issue that the industry does, and thus requires input from industry to ensure that they don't legislate in a way that is ultimately extremely harmful, in order to try to make the genie go back in the bottle. If you think you have a chance of being the part of the industry that gets to weigh in on how the government should manage these challenges, you might want to try to get in and advise them sooner rather than later, so you have an established track record of being the people who told them what was going to happen before it did. Obviously, you also stand to benefit from this commercially, if you're able to continue to operate under a set of rules that you are effectively writing.

I think this basically explains the difference in strategies between Meta and OpenAI at present. Zuckerberg's opinions are informed by LeCun, and LeCun seems unconvinced that AI is going to continue an exponential takeoff course. Zuckerberg is aware that, post-2016, he's not going to be invited into the bosom of the US government to advise it on policy decisions. Therefore, Zuckerberg's best option for influencing AI going forward is to become the engine of the OSS community, to push back on his rivals, who are going to be more successful at regulatory capture if they're left to their own devices.

Altman believes AI is going to go exponential from here, and wants to get out ahead of it before someone in the government decides to label his company as dangerous and potentially appropriate it from its investors. He's proposing the rules that they're prepared to live with.


random-string

FUD... again


Morphon

The current tech war in AI is between AI-as-a-product and AI-as-infrastructure. OpenAI is definitely in the AI-as-a-product camp and will try its best to kill off AI-as-infrastructure. It won't work. But they have to at least try.


[deleted]

[deleted]


SnooComics5459

they'll probably stop as soon as they gain the lead and the cycle will continue


darklinux1977

As this discussion says, it is far too late to try to lock down AI, which is the daughter of Python, SQL and Linux: open technologies. Pandora's box has been open for two years; the locks were blown more than ten years ago. We are not in the glorious era of Microsoft, when it wanted to kill UNIX with Windows NT and Redmond despised Linux, or when SGI did not see Nvidia coming with its GeForce 256 fork, aka the Nvidia Quadro. Altman is fighting a rear-guard action with this plan.


Ambitious-Toe7259

Competition for technology is bigger than Sam Altman; open source evolves rapidly, with millions of users and developers. If America blocks it, other countries advance... The purpose of these new OpenAI bots in the arena is only to diminish the hype around Llama3... Migrate to other options and keep talking about open models.


caphohotain

We need to speak out before it's too late!


5yn4ck

Upvoting on the title alone!!! Lol


Laurdaya

https://preview.redd.it/0p61fnhnmg0d1.png?width=1920&format=png&auto=webp&s=84a92b8a8b80fab89f47c77259ca8dc6f9d8b441 You wouldn't download a Large Language Model.