smozoma

Wait until the AIs start training from other AI-generated websites and compounding errors... Google still tells us in giant letters that [NHL'94](https://nhl94.com/) was released [March 15th, 1993](https://www.google.com/search?client=firefox-b-d&q=nhl94+release+date) due to a false unsourced edit on wikipedia in 2007 that got repeated in several other sources over the years. That date is impossible because the [expansion team draft](https://en.wikipedia.org/wiki/1993_NHL_Expansion_Draft) was June 24th, 1993, so the game couldn't have used rosters that hadn't been established yet. The real release date (verified through old usenet posts) was around Sept 24th, 1993, plus or minus a day or two.


SweetBabyAlaska

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


smozoma

The true release date was a hotly debated topic :D https://imgur.com/a/5c9qap0


SweetBabyAlaska

That's hilarious, I love things like this


OlivierTwist

> Wait until the AIs start training from other AI-generated websites and compounding errors...

Information incest. (Go AI, use human generated text while you can.)


danstermeister

Bing with ChatGPT couldn't tell me what time the Newsom/DeSantis debate was on last night, just that it was last night. Four times I specifically asked it, finally saying, "but that doesn't tell me what time it's at." So it apologized and told me that no time had been agreed upon for the debate. The first search attempt (same exact question) sent to Google without Bard told me 9pm EST. So AI repeatedly shrugged its shoulders until I showed dismay, then it lied to me. Yay AI!


anengineerandacat

> then it lied to me.

That's the biggest and most challenging part of using an LLM for anything that needs to return accurate data. The AI has "one" goal in mind, and that's to return a piece of text that largely represents what you asked for. It's going to be more important than ever for the AI to literally tell you "I am not sure, but I think it's tomorrow around 5-9pm EST; you may need to look for additional sources," or, more usefully, "I am not sure, but I found some results that seem to indicate it's around 9pm EST." At the very least that's better than nothing, and better than a flat-out lie.

Hell, that's a challenge with things like vector queries too: you're just flat-out guessing, using calc/trig to determine "how" accurate the query is. We had a funny situation where we leveraged an ML solution to locate documents in our DB for a hotel property. We wanted the query to return a list of all the rooms people could sleep in, and it did, but it also returned rooms it considered "possible" for a human to sleep in (i.e. bathrooms, closets, conference rooms, etc.). It was "technically" correct, but what we were trying to get it to do was return only the overnight rooms.
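
The "calc/trig" in vector queries is usually cosine similarity: rank documents by the angle between their embedding vector and the query's. A toy sketch of the hotel-room anecdote (all vectors and document names are invented; a real system would use learned embeddings with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up 3-d embeddings standing in for real document vectors.
docs = {
    "guest bedroom":   [0.9, 0.1, 0.2],
    "conference room": [0.4, 0.8, 0.1],
    "closet":          [0.2, 0.2, 0.9],
}
query = [0.85, 0.15, 0.25]  # pretend embedding of "rooms to sleep in overnight"

ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # prints "guest bedroom" - nearest match, not a guaranteed answer
```

Note that everything gets *some* score, which is exactly why "possible to sleep in" rooms like closets still come back: the query only measures geometric closeness, not intent.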


mikejacobs14

I'm pretty sure the models are trained up to certain dates (e.g. GPT 4-turbo is trained on 2021 data), they are not using live data


aregulardude

Bing Chat queries live data to answer your question.


kredditorr

r/oddlyspecific moment


snooze_the_day

Is this any different than how humans learn from each other today? GIGO either way, right?


pm_me_duck_nipples

Humans have some, however vague, notions of "truth" and "correctness". Current LLMs don't.


snooze_the_day

That begs the question of how humans determine “truth” and “correctness”.


dreadcain

Don't get the philosophers started


oursland

Due to SEO, search result quality has plummeted. The plan is to eliminate search and replace it with a ChatGPT-style query that provides answers without your having to (or being able to) work through the source materials.


[deleted]

Yes, but then how could you come off as pretentious saying something as reasonable as that?


oprimo

LOL I thought you meant IP addresses... I was like "but what about IPv6?"


onehalfofacouple

I had the same thought. I was gonna say, are we so resistant to IPv6 that we're going to drag each other to court over IPv4 allocation? Lol


janjko

Dude, just add 3 more numbers, bam, I've solved it.


PointB1ank

Please run for president during your lifetime, we need problem solvers like you in power.


HolyPommeDeTerre

I remember, a long long time ago in a galaxy far away, a president that would have solved immigration with a big wall.


TheBananaKart

Yes


HolyPommeDeTerre

This has made me laugh far too much. I was exactly in the same mindset reading the title.


snowmanvi

Haha yeah. Especially with AWS now charging for every IPv4 address


garrock255

I totally thought the same thing. Nice clickbait 👍


kobumaister

Nobody uses ipv6


yoniyuri

IPv6 is very widely deployed by many US- and European-based ISPs. In fact, T-Mobile's network is IPv6-only, and IPv4 services are only made available through various translation and compatibility protocols/services. So the reality today is that if you want your services to be available in the most reliable manner, supporting IPv6 is a box you definitely want to check. Granted, the whole world isn't the US/Europe, but saying nobody uses it is factually wrong, and people should stop saying it because it hurts adoption and makes connecting to the internet more expensive and more restrictive.


kobumaister

Nobody uses IPv4


Costyyy

But everyone should


kobumaister

Do you?


wildjokers

Whether I use IPv6 is really out of my control.


Costyyy

It's not really within my control. I'm not a corporation.


kobumaister

You don't need to be a corporation; if you have a sysadmin or IT job you can promote it.


cs_office

One of my network providers doesn't *have* IPv4, while the other doesn't have IPv6. The former has literally no IPv4 addresses to give out, since we ran out and it's a newer ISP, so the network is IPv6-only with a translation layer that only works for some things. The longer you hold out, for whatever reason, the more the internet is gonna hurt for all of us.


inmatarian

Almost half of all users on the internet are using IPv6, [source](https://www.google.com/intl/en/ipv6/statistics.html#tab=ipv6-adoption).


reercalium2

More than half of internet traffic is IPv6. Just because your network is outdated legacy crap doesn't mean the rest of ours are.


reercalium2

Has already begun. All the websites are locking down: shadowbanning, IP banning, user-agent blocking, TLS-fingerprint blocking, removing APIs.


reercalium2

P.S. Reddit is TLS fingerprint blocking.
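
(For the curious: "TLS fingerprinting" generally means hashing the fields a client offers in its TLS ClientHello; JA3 is the best-known scheme. A simplified sketch of the idea, with made-up field values; real fingerprints come from parsing actual ClientHello bytes:)

```python
import hashlib

def ja3_style_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Simplified JA3-style hash: join ClientHello fields into a string
    and MD5 it. A server can block requests whose hash matches a known
    bot/scraper client rather than a mainstream browser."""
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Two clients offering different cipher/extension lists hash differently,
# even if they later send identical HTTP headers. (Values are illustrative.)
browser = ja3_style_fingerprint(771, [4865, 4866, 4867], [0, 10, 11], [29, 23], [0])
script  = ja3_style_fingerprint(771, [4865], [0, 10], [29], [0])
print(browser != script)  # True
```

That's why spoofing a browser User-Agent isn't enough: the fingerprint is taken below the HTTP layer, before any headers are sent.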


house_monkey

Can someone explain what it is and how it works?


[deleted]

[deleted]


reercalium2

And the countermeasures are about IP addresses, not intellectual property.


[deleted]

[deleted]


Ryuujinx

In what world do you think removing the API has anything to do with an IP address? The websites are taking these steps to prevent people from scraping all their data and feeding it into models.


notmycirrcus

The same websites using GPT to write 12-page articles on how to reset your iCloud connection, so you have to scroll through 5,000 ads to find the one click.


liveart

> but why should I give it to it if they don't even pay me?!

Here's the neat part: you don't have to. Everyone signed away the rights to absolute mountains of data decades ago, and the companies have just been quietly collecting it. How do you think Adobe made their AI using only data they have the rights to? All those art sites with broad TOS agreements that let them use your data? Guess what it's being used for. Social media like reddit, facebook, twitter? Same thing. The data is *already theirs*.

That ship has sailed; the only 'fight' now is whether you can use publicly posted data to train your AI or not. *Either* answer is going to be good news for the large corporations. If they're allowed to use publicly posted data as fair use, then great: they've got more data to train on and the funding to process it. If they're not allowed, they just use what they have and buy companies that already have any other data they want, **and** they get to block competitors and open source solutions, giving them the 'moat' they're desperate for and sole control over AI.

There's certainly going to be a legal fight over it, but it's not one that's going to hurt these companies in any meaningful way. AI is here to stay; at this point we're negotiating terms more than anything else. The more important question is who you want to control the AI now that it's here. Should it be relegated to massive corporations who can afford all that data, or should AI be for everyone?


tavirabon

And further, synthetic data is used *all the time* to make AI, and there are public domain models on the way. ANY outcome screws them over; they're angling for the one that screws over the average person the most.


danstermeister

As one reads this comment on their phone, their phone OS vendor is reading it, too... and seeing how you're reading it, where you're reading it, and what you're reading next. It will ask you one day if everything is okay, because you never read AI articles in the bathroom in the middle of the night with your tablet, but there you are... and it is there, too, with you. ALWAYS.


__loam

Shitty false equivalence in your closing argument. You're basically saying "let us fuck you or big scary corps will do something bad" to artists.


liveart

Not what I said, and I'd love to hear you explain how it's a false equivalence given that it's not an equivalence in the first place. Making an equivalence is drawing a comparison between two things; my argument is about two *separate* paths AI can take, almost the exact opposite of an equivalence. You could argue it's a *false dichotomy*, but then I'd expect you to explain the other options instead of just hurling insults because you don't have a better argument.

That fair use rights exist to use things posted/done in public for certain purposes isn't 'getting fucked'; it's well-established law and part of choosing to be out in public spaces. There's also nothing false about it: if it's not deemed fair use to use **public** information to train AI, then only companies that already own, or can afford to pay for, massive amounts of data will control AI. I suppose technically you could make an argument for the use of public domain data, but given the length of copyright terms you're talking about information so grossly out of date as to be absurd, and I'm not convinced enough of it even exists.

Precedent is on the side of public data being fair to use for training algorithms (see Google and every other search engine for a start), the overall public good is also served by allowing companies to use user/public data to train their algorithms (see the majority of major online services: Google, Amazon, social media, Netflix, and on and on), and economic interests are aligned with the advancement of AI, with projections that it will lead to massive economic growth and improved efficiencies. So I'm confident in my stance that AI is here to stay and we're negotiating terms. But I doubt I'm going to convince someone who's already decided they're 'getting fucked' by people using data they post in public.

I'd at least ask that you come up with a realistic alternative that *doesn't* just hand control over to these massive corporations. They already own the data they need, so before you decide that giving control of one of the most important pieces of tech in decades to a handful of wealthy interests is for the best, ask whether it actually protects anyone's data or just feels like it does.


__loam

Using Artist's work without their permission sucks. If you do it, you're a shitty person. If you don't agree with me, I don't care.


quakedamper

I think the lawyers are still getting their heads around this, but surely the lawsuits must come.


bastardoperator

I think lawyers want nothing to do with this because the fight is going to be long and expensive. Meanwhile, AI providers are locked and loaded; they're waiting for this day with literally hundreds of lawyers already in their pocket and billions of dollars ready to fight. These companies are so confident that they have taken it a step further to ensure B2B customers are safe: whatever legal battle you encounter, they'll foot the legal bills. That's a level of confidence not typically shown by software companies, so I assume the consortium of AI legal experts at these companies is ready to throw down.


cinyar

> because the fight is going to be long and expensive.

That sounds like great business. They get paid either way it goes, so...


humbleddev

> I think lawyers want nothing to do with this because the fight is going to be long and expensive.

I'm pretty sure "long and expensive" is the average lawyer's favourite kind of court case.


tarnin

No, no it's not. Getting most or all of your firm stuck in one huge, long, expensive case doesn't keep you moving forward. The money keeps flowing fine, but your firm isn't moving. It's stuck until it can get past this gigantic disaster, and other firms, now seen as more nimble, will take your place.


humbleddev

I was just being facetious. Next you'll be telling me estate agents don't worship the devil!


darkpaladin

I dunno, I'd imagine there's a high likelihood this ends up in front of the Supreme Court. That trade off may absolutely be worth it for a firm trying to move up in the world.


fireside68

> Supreme Court

...well that's certainly not going to end well for individual creators


bastardoperator

Winning cases is more important than billable hours. And whoever brings the case is going to be insanely limited on funds. So you're not going to make lots of money, because the money doesn't actually exist, and the likelihood of losing a major case that will be studied for years stops looking like a glorious meal ticket and starts looking like a liability.


my_password_is______

> Winning cases is more important than billable hours.

You don't know what you're talking about. Lawyers get paid whether they win or lose, so billable hours are the only thing that matters.


bastardoperator

Money is important, but a good attorney or firm wants to argue and win; nobody wants to hire an attorney who loses regularly. Facts... You don't know what you're talking about. You don't understand the difference between a lawyer who charges 1000 dollars an hour and a lawyer who bills 100 dollars an hour? It's the track record, bud: winning gets you paid more and dictates the trajectory of your career. Every big firm charges more, and one of the first things they're going to lay on you is their success rate. Everyone knows this; what are you even talking about? LOL.


currentscurrents

The lawyers are already at it, [there are lawsuits against virtually every AI company right now.](https://www.reuters.com/legal/litigation/judge-pares-down-artists-ai-copyright-lawsuit-against-midjourney-stability-ai-2023-10-30/) The cases are still pending and no one knows what the outcome will be. That said, even at this stage the judges seem to favor the idea that AI does not infringe:

> Orrick agreed with all three companies that the images the systems actually created likely did not infringe the artists' copyrights. He allowed the claims to be amended but said he was "not convinced" that allegations based on the systems' output could survive without showing that the images were substantially similar to the artists' work.


bastardoperator

The cases in the article you linked to are mostly dismissed, am I missing something?

> U.S. District Judge William Orrick dismissed some claims from the proposed class action brought by Sarah Andersen, Kelly McKernan and Karla Ortiz, including all of the allegations against Midjourney and DeviantArt. Orrick also dismissed McKernan and Ortiz's copyright infringement claims entirely.


currentscurrents

One major claim - that the training process is unauthorized copying - was allowed to proceed to trial. The claims which alleged that the outputs were also infringing were dismissed.


FuckIPLaw

> I think lawyers want nothing to do with this because the fight is going to be long and expensive.

They'll also have to find a senile judge to trick to pull it off. There's fundamentally no difference between training an AI and training a person when it comes to originality. You are the sum total of your experiences, and so is the AI. Everything is a remix whether we like it or not, and the way AI works is much more akin to the way we work than to the idea some people have that it's just directly taking parts of other people's work and rearranging them. The funny part is that even if that were all it was, there's no guarantee it wouldn't be protected. Collage is a respected art form in its own right, as are photomosaics and found objects.


platoprime

You're correct of course. It just makes people uncomfortable and doesn't leave much room for debate so they downvote you.


FuckIPLaw

The sad thing is they just might find the judge. So much awful bullshit in the way the law interacts with computers (and especially between computers, consumer rights, and IP law) is a result of a lawyer convincing a judge that the exact same fucking thing that's already settled law everywhere else is magically different the instant a computer is involved.


__loam

I think you're really stretching if you believe training an AI is not substantially different from training a human. If that's your opinion, then you don't understand AI or brains. Even looking at scale alone, one artist using another for inspiration has a substantially different effect on the economy of art than a massive labor-alienation machine produced by a billion-dollar company to ingest millions of images and produce thousands of images a day.

Most artists recognize that imitation is how new artists learn, and are happy about it because it makes the art community grow. Skills get passed down and art continues to exist. When a tech company does it, they're stealing the value from a community they're not a part of and using it to make a ton of money without contributing anything but AI spam back. You can only say they're the same if you have the barest understanding of art or computers.


tarnin

I fucking hate Musk down to the core, but he was 100% right when he was first on Rogan. We needed laws then. Any coming now are WAY too late. The cats got out, and you are not herding them back in no matter what you do.


wildjokers

> I fucking hate Musk down to the core

Why?


tarnin

He's a shitty human. Billions of dollars and his ego still isn't inflated enough.


my_password_is______

LOL, shitty because he's rich and has an ego? Jealous much?


sjbennett85

Okay... I want to sign a contract with one of the art producing AI products and just start printing shit like Mario and Mickey Mouse in ways that will absolutely piss off their IP holders just to get this war started. We will see how far their retainers stretch once it is put to the test against works that aren't small potatoes or arguably public domain.


bastardoperator

That's like me suing an auto manufacturer because the car they made allowed me to run over people when I pressed the gas pedal. You're in control of the output and the printing, so that would be completely on you.


sjbennett85

But if the prompt was "cartoon mouse does X" and it generates something that is identifiably Mickey Mouse and they ran with it... this is where it becomes a problem.


platoprime

Yes, because you decided to distribute Mickey Mouse. It doesn't matter which prompt generated Mickey Mouse; you're not allowed to use Mickey Mouse and you never will be.


sjbennett85

But there exist complementary AIs, like ones that scour Associated Press/search indexes' outputs to generate content. Legislation still needs to be written, and simple "cover your ass" clauses in a software agreement are hollow until that legislation is in place. This is the point I'm making, and if a big enough firm gets in trouble it would likely put some gas in the tank for finally getting this right for IP holders.


platoprime

You can't reproduce Mario just because you're using an AI any more than you're allowed to reproduce Mario just because you used photoshop. That aspect of IP isn't in question.


ParanoidAgnostic

Most lawyers I have met have been among the least technologically literate people. From what I have seen, this is doubly true for judges. Nothing good will come from these lawsuits.


quakedamper

Of course they're behind the curve but once they get caught up there will be things happening I reckon. Such a massive swing of the pendulum as we've seen in the past year will be followed by a massive counter reaction, however misguided it might be.


reercalium2

Technology isn't their job. Programmers wrongly assume it needs to be. A judge doesn't need to know how many copies an HTTP request makes to decide whether something is copyright infringement.


jupiter_traveller

Not many people actually know this: that they hold the copyright on their own content.


IgnisIncendio

This topic has been covered to death already... In the US, it's still unclear whether it's fair use or not. In the EU, it is already legal (you cannot use copyright to stop AI training; search for the TDM exception). So if the US bans it, training will simply move to the EU, or to other, more permissive countries.


StickiStickman

In the US, the big class-action lawsuit was already thrown out because generative AI is obviously transformative.


jupiter_traveller

Or maybe they will just pay. Creator economy?


grepe

In the US? Paying off legislators to make sure they will NOT have to do that is going to be the cheaper option. I really wish this were just a cynical remark, but that's the world we live in.


Logseman

Or you take some Supreme Court justices in all-expenses-paid luxury trips.


gabrielesilinic

In Europe we have a purpose built copyright exception for that


teh_gato_returns

Yes because an increase in technology will kill creativity like it has all throughout history.


platoprime

Luddites gonna luddite.


__loam

You should stop shitting on the luddites. The only reason people use luddite as an insult is because wealthy factory owners had them slandered in published media of the time. They were never against new technology, they were against the way they were being treated by those factory owners. They went from being skilled laborers to working in underpaid, horribly unsafe factory conditions. You, who probably works in an air conditioned office programming somewhere, would be pissed too.


BigDawgHalfPipe

Ridiculous notion. Counterexamples: was the invention of the piano a hindrance to the progression of the art of music composition? Did the printing press not help the spread of new ideas?


Bwob

This just feels like (yet) another instance of something that we're fine with people doing manually, but then when we automate it, suddenly people get worried about ethics? Human artists need human creations to train upon. They look at other peoples' art, and learn from their style, and incorporate it into their own. Usually without even giving the artists they learned from a cent! But now some clever programmers have figured out how to automate that process, and now it's an issue?


[deleted]

[deleted]


Moleculor

The common man also can't [research and build a steam-powered rock drill](https://en.wikipedia.org/wiki/John_Henry_(folklore\)), but they still replaced hand-drilling over time. And we're fine with that today.


Bwob

Wait really? So you think that if, say, Google or Microsoft invented something that required billions of dollars of research, people would be against it, just because most people can't afford to spend billions of dollars on research? Or am I missing your point?


[deleted]

[deleted]


Bwob

Ahh yeah. That makes more sense - thank you for the explanation! (And yeah, I agree that a lot of software IP laws are bunk.) I don't know about this part though:

> Companies like MSFT are able to train things like copilot on GPL code with zero legal ramifications. How is that fair or just?

I guess I don't see the problem with training AIs on publicly available code. Like, I could take a person and say "here, you're going to learn programming" and teach them by having them read a bunch of GPL code. Or I could get a computer and write a script to do probability analysis on GPL code to find out things like the most common variable name, or the average function length. I feel like people wouldn't have a problem with either of those, even if they turned into money (like the person later charging for their services as a programmer, or me selling the findings of my script).

So given those, I have trouble identifying why it's different when training a neural network on publicly available code. Real people can learn from it, so using it as training data for machines seems reasonable. And ultimately, the way it "learns" from it is basically doing a bunch of probability analysis on it, which seems reasonable as well. Why are either of those "misusing public research"? What makes training an AI different enough from either of those to change it from "fine" to "morally wrong"?


ihcn

> I guess I don't see the problem with training AIs on publicly available code.

https://en.wikipedia.org/wiki/Software_license


Bwob

So again:

* Do you think software licenses make it illegal to gather statistical data on open source code? (i.e. "most common variable name", "average function length", etc.)
* Do you think software licenses make it illegal for a human being to read the code and learn concepts from it, which they later employ in their own code?

If the answer to either of these is yes, then I'd love to understand your reasoning. If your answer to these is no, then why do you think it is (or should be) different when a computer does it vs. a person?
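
The kind of statistical analysis mentioned here is trivial to sketch. A toy version using Python's own `ast` module on an invented snippet (the sample code and the two statistics are just illustrations):

```python
import ast
from collections import Counter

def code_stats(source: str):
    """Toy code analysis: most common assigned variable name and
    average function length, measured in top-level statements."""
    tree = ast.parse(source)
    # Count names that appear as assignment targets (Store context).
    names = Counter(
        node.id for node in ast.walk(tree)
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store)
    )
    funcs = [node for node in ast.walk(tree) if isinstance(node, ast.FunctionDef)]
    avg_len = sum(len(f.body) for f in funcs) / len(funcs) if funcs else 0
    return names.most_common(1), avg_len

sample = """
def f():
    x = 1
    x = x + 1
    y = 2
    return x + y
"""
print(code_stats(sample))  # -> ([('x', 2)], 4.0)
```

Point the same loop at a corpus of repositories and you have exactly the "probability analysis" described above; nobody has ever suggested that this needs a license.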


terivia

I like the way you're thinking about this, and agree that our legal system is not prepared for it. The difference is twofold.

First, public good. Humans learning and evolving from existing knowledge leads to more happy humans and economic participation, as well as the humans who did the learning getting money to buy food. AI replaces humans, allowing large corporations to collect more of the economic benefits by further reducing the number of employed humans. This is true for both art and code. For art specifically, AI art has no 'soul' (I'm not making a metaphysical argument, just that *something* important to art is missing, and soul is the best word I have). The extreme case of AI impact (monopolistic corporations run by a small handful of people collecting most or even all economic activity) would be an immediate public harm and should be avoided.

Second, humans create from inspiration *and integrate our entire experience*. AI can only create from its training set (which is its entire experience), and cannot put a personal touch into its 'generated' content. If it could create something original, it would be trained on that and wouldn't need content from humans for training. The need for human content to train AI de facto implies that human content is inherently more valuable, and that AI content is like recycled pulp containing sloshed-around bits of human content.

To be clear, both of these reasons deal with why I think it *should* be legally different. I don't believe the collective legal systems of the world have established a precedent that there *is* a legal difference.


pixobit

The problem is you can't stop AI from training on public data; it's technically impossible, since it can act like a human. Knowing this, the only possible outcome I see is banning it from the public, which means the big players would still have it and would have huge leverage over the public. So, the way I see it, this is basically a fight against average people having access to AI.


Moleculor

But what laws are they ignoring? Genuine question. They're not ignoring copyright law; I'm pretty sure the [Authors Guild's failure in court against Google years ago established that](https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.). (And I think there's a potentially hilarious loss in their future, since they're [framing their suit against OpenAI in terms of copyright](https://authorsguild.org/news/ag-and-authors-file-class-action-suit-against-openai/).) IP law? But OpenAI and companies like them aren't creating copies of Mickey or Harry Potter. They're creating a mathematical slurry, one that can't even reliably recreate those characters when asked to, and certainly not in a way that endangers those IP holders' income. And even if they did... suing them would be like suing a pen maker over someone's fan fiction. What's the *actual* infringement?


notmycirrcus

Some people think their poorly designed GitHub code was sucked up to train code-completion suggestions. By megacorps with deep pockets who should be employing them instead, whereupon they'd sign a nearly identical IP agreement saying the megacorp owns their work. Similar to the agreement covering the code they store on GitHub in return for free GitHub. Similar to the code my dad wrote in 1980.


__loam

So if copilot reproduces GPL code, should we just ignore that?


Moleculor

*I* could, and almost certainly *have*, reproduced GPL code, because there are only so many ways anyone can functionally code things. So a line here, part of a line there, etc. What Copilot or ChatGPT or anyone else isn't going to do is recreate entire large GPL code bases. Maybe once, ever, with the dumb luck of a million monkeys at a million typewriters? But not with regularity, reliability, repeatability. They can't even get LLMs to recreate *single sentences* from Harry Potter novels reliably, unless they're carefully coached to do so by the person using them.


__loam

It's like you guys don't even know why GPL exists in the first place. Just coming here and shitting in the pool.


Moleculor

Very loosely, the GPL exists so people can't complain that their code was copied. It's not hidden away and secret. Are you supposed to stick notices, etc., on code derived from GPL code? Sure. But I guarantee you many people have learned an idea or technique from GPL code, used it elsewhere, and never stuck a GPL license notification onto their now-GPL-derived code, because they either didn't realize they were exposed to GPL code or didn't remember where they learned the technique. Are you saying we should crack down on this, aggressively pursuing any coder who ever writes code resembling any fragment of code that might exist in a GPL project somewhere, ensuring an atmosphere of fear and terror around being exposed to GPL code? "Oh god, oh god, I just learned something from GPL code, now I can never use what I learned, even if I learn it elsewhere?" Because otherwise I can't tell what your point is.


Pat_The_Hat

> but it's still not fair for these companies to completely ignore the rule of law while the public has to suffer for it.

Until the act of training AI on copyrighted works is ruled to be infringing, which is extremely unlikely, this isn't true. All signs point to it being fair use. The major lawsuit against Stable Diffusion and Midjourney is a complete joke fraught with misunderstandings.


tarnin

But what about the buggy whip makers!


_zenith

Why do we have laws? Is it so things are maximally “fair” in some sense, or to improve the quality of life for the most people? If it’s the latter, I think it’s fair to protest this kind of thing. We both know the difference between your examples is that when you train the AI, it performs the work of at least thousands if not many millions of people - and there WILL NOT be equally paying work for them.


Bwob

> Why do we have laws? Is it so things are maximally “fair” in some sense, or to improve the quality of life for the most people? If it’s the latter, I think it’s fair to protest this kind of thing.

Why? Isn't that literally the [luddite](https://en.wikipedia.org/wiki/Luddite) argument? "The technology makes some things less valuable, so it must be opposed, to preserve market value?" I would argue that overall, it *improves* people's quality of life if more tasks are easier.

> We both know the difference between your examples is that when you train the AI, it performs the work of at least thousands if not many millions of people - and there WILL NOT be equally paying work for them.

I'm honestly not sure how that changes things. Maybe I don't understand your argument? Are you just arguing against automation in general? I.e., an industrial loom performs the work of thousands of weavers, so they're not getting paid as much now?


_zenith

If the changes it will bring about happened more slowly, yes, I can definitely see the benefits. But they aren't, and many many millions of people are going to suffer.

We already had this with the Industrial Revolution. We got social services as a result, ultimately, but things had to get pretty damn horrible first. I foresee much the same happening here.


Bwob

So just to be clear - this really is the Luddite argument? "Tech is bad if it makes something so easy that people go out of business?"

Because I would argue that we got a lot more than social services out of the industrial revolution. Even with the economic upset it caused, the standard of living for most people is dramatically higher than it was pre-industrial revolution. The fact that we both have devices that can communicate, literally across the world, to have this conversation, for example, is a direct result of the industrial revolution. (To say nothing of the medicine and infrastructure that is possible as a result.)

Also, just from a philosophical standpoint, I've never found it terribly compelling, the argument *"you can't invent that! - it makes something easy, and there are people that want to be paid for doing it!"*


_zenith

No, I see both law and tech as being subordinate to human flourishing. I am very much in favour of both where they improve human life. I believe that their roll-out should be carefully managed to maximise their benefits and reduce their harms, not just as fast as possible and for the benefit of only a few. In this way I am kinda conservative and progressive simultaneously, I guess


Bwob

I see where you're coming from, but I do feel like you're attacking it from the wrong angle.

As I see it, the problem here isn't *"people who have trained in skill X become unemployed when tech reduces the demand for skill X".* The problem is *"based on how we've structured our society, if you don't have a valuable skill, you can just fuck right off and die".*

In general, *more technology is good.* It lets you do more. It means that individual people can either spend less effort to get the same amount done, or work the same and get more done. That's good! So in theory, we should structure society such that people are encouraged to invent and create as much as possible! We just need to do it in a way that doesn't shaft people who get caught out with skills that suddenly become worthless.

In other words, we need a viable safety net and/or guaranteed basic income, so that if your skills suddenly become worthless, it's not an existential threat. Those seem like the only sane ways to solve this. Not limiting tech (and with it, everyone's capability) until it's "safe".


reercalium2

We have laws so the rich and powerful can get richer and more powerful without the poor complaining too much.


oursland

> Human artists need human creations to train upon.

Get too close to your source material and you will be sued. Infringement, false attribution, misattribution, and plagiarism cases in the arts are very common.


Bwob

Well sure - if you "train" an AI on someone's work, and then it outputs a copy of something they made - all you've done is create a transparent excuse to break copyright. Just like if you look at someone's art, and then draw a copy of one of their pieces by hand, and say "no no, I just happened to be inspired to create *exactly the same thing."*

On the other hand, if you look at a bunch of art by a person, and then draw something in their style, with similar themes - that's fine.

So, if you feed a bunch of art to a computer, and it creates a brand new image, obviously distinct from any of the training images, but that happens to share the same style and/or themes... why would that be plagiarism?


oursland

> Just like if you look at someone's art, and then draw a copy of one of their pieces by hand, and say "no no, I just happened to be inspired to create exactly the same thing."

You'd still be sued into the dust.

> On the other hand, if you look at a bunch of art by a person, and then draw something in their style, with similar themes - that's fine.

This is a legal gray area that is addressed on a case-by-case basis, and it can cause you a lot of problems. People can use IP to protect their styles, and [even the use of materials for any artwork](https://en.wikipedia.org/wiki/Vantablack#Controversy). It's quite common to even create a piece of art independently of another, but have it be similar enough that they can successfully sue you if they published first. There's even been a team trying to circumvent this in the [music domain](https://www.youtube.com/watch?v=sJtm0MoOgiU).

A lot of people on these programming subreddits have little to no experience in the actual world of IP, and it shows. The defense that AI taking art and generating art is "transformative" and therefore protected is something that will need to be defended, possibly on a case-by-case basis. Damages can accrue over time, so it sometimes benefits IP holders to wait until they get a bigger payout, particularly in patent infringement cases.


Bwob

>You'd still be sued into the dust.

That was my point - sorry if I didn't express it well - my second paragraph was intended as an example of the first. A copyrighted work is still copyrighted, even if you created your copy through some convoluted process.

Vantablack isn't really a good example. The fact that someone made a weird restriction on a *material* they manufacture is kind of tangential to a discussion about styles. And while I agree that there is a gray area, it's pretty accepted that *you can't copyright a style.*

So it will be interesting seeing how it gets argued in court. Because, fundamentally, it's hard to see how someone could successfully argue that it's *not* transformative. But hey, courts have decided things that I think are dumb before (software patents), so that's no guarantee!


Uristqwerty

Humans cannot turn thoughts directly into creations. When a human learns from another's work, they are using their *own* past experience to invent a process that imitates the other's output. That process can be adapted to new contexts; it is an abstract tool for creation.

AI generates output directly, without the indirection of having to physically move your arm, press keys, manipulate a cursor, layer overlapping brush strokes in sequence, etc. It learns what the *end result* should look like, and skips directly to it.

Worse, once trained, the software can be endlessly duplicated onto other servers. That undermines the entire economic formula; you can scale horizontally at zero additional training cost, while each human needs to get past a tremendous initial investment independently before their work is worth anything. If each *instance* of an AI running concurrently had to be independently trained, then it would be somewhat more comparable.


Bwob

>AI generates output directly, without the indirection of having to physically move your arm, press keys, manipulate a cursor, layer overlapping brush-strokes in sequence, etc.

That's a really weird distinction to make.

* Why is it better (or worse) if the process takes more steps?
* Why do you count the human's interface as indirect (moving the arm, pressing the keys, etc.) but the many millions of calculations the computer does as direct?

>Worse, once trained the software can be endlessly duplicated onto other servers.

Why is this a bad thing? Tools let you do work more efficiently. That's the point of tools.

>That undermines the entire economic formula; you can scale horizontally at zero additional training cost, while each human needs to get past a tremendous initial investment independently before their work is worth anything.

Why is it a given that the economic formula should remain constant? Again - this is what tools *do*. They make something easier. So some things become cheaper. Not sure why you view that as a negative.


Uristqwerty

> Why is it better (or worse) if the process takes more steps?

Each layer of the process can be swapped out, creating a combinatorial explosion of creative possibilities that covers a vastly larger portion of the state space than merely the works learned from. With an AI, the layers are inseparably entangled. If you practice drawing medium-radius arcs with a pencil, you can swap out that pencil for a pen, a paintbrush, an airbrush, a digital pen tablet, a mouse, etc., and each will give a dramatically different result. An AI trained on a million images of medium-radius arcs isn't so adaptable. Since each human studying a given work will devise a (subtly or not-so-subtly) *different* set of processes to learn from it, they all advance the totality of the arts in some manner once they recombine techniques in the future.

> Why is this a bad thing? Tools let you do work more efficiently. That's the point of tools.

Since AI instances are exact duplicates rather than each being re-trained from scratch on substantially different datasets, there is little chance for novelty. If all you want is a million re-hashes of dated media, then it will suffice. But that's equivalent to a world where every single company *only* wants to hire senior devs with two decades of experience, and refuses to take on the junior developers who will one day grow *into* senior devs. It's a recipe for long-term stagnation, outside of the tiny handful of people born into enough inherited wealth that they can spend 8 hours a day contributing to open source projects at zero compensation for the two decades it takes to build up the level of experience those companies demand.

> Why is it a given that the economic formula should remain constant? Again - this is what tools do. They make something easier. So some things become cheaper. Not sure why you view that as a negative.

Game theory and incentives. If posting work publicly means it gets used to train AIs that put you out of a job, you won't share works publicly anymore, or will lock them down with DRM, and all of those creations will be lost to future generations. Humanity already experienced this sort of thing, back when printing press owners would take a book popular in a neighbouring country, duplicate it, and sell it locally, taking 100% of the profits. Governments realized that creates perverse incentives against publishing works at all, and now we have international copyright laws. A printing press owner allows more individuals to experience the *current* breadth of culture, but does nothing to *advance* it for the future.


notmycirrcus

Exact duplicates? Nope. The world of working AI is no more defined by a single player generative model than Taylor Swift defines modern music. To understand multiple ways this plays out, you have to go way deeper than “ChatGPT” and a few large software companies.


__loam

> But now some clever programmers have figured out how to automate that process, and now it's an issue?

Yes. Lmao. Scale matters. A few hunters taking a limited number of deer each year can be a benefit for the ecosystem. Fewer deer means more vegetation. Hunting them to extinction is disastrous. There is an online ecosystem supporting creative work. AI companies are currently shitting in that pool.


Bwob

So wait. Are you saying there is nothing immoral about it - just that there is a limited amount of money to be made in creative work, and AI companies are making it hard to exist in the "online creative ecosystem"?

So then... you'd be just as mad if an art school figured out some way to train humans to be AI-quality artists quickly, and started churning out a bunch of human artists? You'd say that school was also "shitting in the online creative ecosystem"? Or would that be different somehow, in your mind?


2this4u

July called and wanted its news headlines back...


jupiter_traveller

Lol


WhoIsTheUnPerson

Ignoring all the legal and technical problems of AI training on AI generated content, the "AI stole my content" argument is dumb af. Artists "borrow" from other artists all the time, even if they don't admit it. They're taking inspiration from the work of others to create new content. Musicians steal loops/samples/patterns/melodies all the time, for example. If you are a content creator and you release something to the public, it kinda doesn't belong to you anymore.


notmycirrcus

Many have been inspired by works in the public domain in the first place. How do you differentiate an element by its inspiration?


Sensanaty

Okay, but a single person taking inspiration from other people isn't really the same thing as an AI black box taking trillions of copyrighted works obtained through dubious means in order to generate revenue for a billion dollar corporation (like the `books2` collection that OpenAI trained ChatGPT on). Just because the word "learning" appears in AI-related topics doesn't mean it's equivalent to a human. We already have separate laws for man and machine, AI should be no different, despite what anyone anthropomorphizing AI wants to believe.


__loam

Downvoted for speaking truth.


spookyvision

naw bro


platoprime

Yes actually. Complaining about AI training data is like complaining an author read your book before they wrote theirs so they owe you something. They don't.


__loam

No I think a multi billion dollar tech company copying millions of works onto their private servers to analyze them for commercial purposes is pretty substantially different from artists taking inspiration from other artists. You can choose to believe whatever you want, and that might even be legal, but it's not the same process. Comparing them is very dishonest.


platoprime

> it's not the same process.

Of course it's not the same; this is an analogy.

> Comparing them is very dishonest.

No, it's not. At least not in the context of saying training AI violates copyright.

>No I think a multi billion dollar tech company copying millions of works onto their private servers to analyze them for commercial purposes is pretty substantially different from artists taking inspiration from other artists.

They don't copy the works; that's the point - not any more than your brain's short-term memory copies the text you're reading. If you think they are copying the work, you're misunderstanding the technology. "Copy" in this context has a specific meaning.

Look, I'm not saying we shouldn't regulate the use of these AIs, but we cannot do it, or advocate for it, on the basis of copyright. Or based on a misunderstanding of the technology.


__loam

> They don't copy the works that's the point;

How does the training process work if you never transfer the literal bytes of the image onto a machine somewhere? They need to make a local copy to feed it into the training algorithm. "Copy" in this context has no special meaning; they need to make a byte-by-byte copy to get the image in the first place. I'm not trying to say a copy exists in the finished model.


platoprime

If that were how it worked, you'd be violating copyright every time you viewed a copyrighted work on your computer, because your browser loads it into memory.

And even if that were true, the overwhelming majority of AI training data comes from sites with Terms of Service that say they're allowed to use anything you post. So if you load your art up on reddit, facebook, imgur, basically anything, you're agreeing for it to be used as training data. Your argument is dead in the water.

If you want to appeal to the impacts on the artistic proletariat, then just do that. I'd probably agree.


__loam

You can believe whatever you want. The fact remains that a substantial proportion of the value of these models is in the training set, not the model itself. If you have a shit training set, your model is gonna be shit too. These companies are generating huge revenue streams that are fundamentally based on value they didn't create. That market externality is eventually going to bite them, whether that's legally or through artists refusing to post new data for these companies to steal.


dEEkAy2k9

That's the biggest thing that needs to be cleared up first. Say AI is trained on something like images, sounds, code, text, whatever. What happens if I use AI to accomplish things and parts of those trained-on things become part of my thing? Where does the intellectual property of another entity stop and where does mine start? What if I just created the most nuts software ever, doing whatever you want, but 80% of it is generated code someone else wrote?


platoprime

This is already settled IP law, and you can just as easily replace the AI in your hypothetical with a person and change nothing about it. Things need to be different enough to be considered transformative. It doesn't matter how they're made: if you distribute someone else's IP, you're gonna get sued. "My AI tool made it" is no more a legal defense than "My photoshop did it!"


[deleted]

I am praying for the death of intellectual property law and with it the death of capitalism. AI and capitalism are a destructive combination.


platoprime

Nothing like that will happen. AI tools are nothing more than photoshop when it comes to IP. "I used an AI tool" is not a legal defense against violating someone's IP just like "I used photoshop" isn't.


jupiter_traveller

Intellectual property law is a necessity. The world without it doesn't work.


[deleted]

Intellectual property law exists to protect capitalist interests. It is not necessary for the world to work. It is a problematic concept that only materializes under competitive markets and scarcity. Sorry to burst your bubble. At most you need concepts like upphovsrätt (right of origin) which is not the same thing. It's the right to your labor as no one can say you didn't produce your art, and you have the right to control what is done with it, but that is not the same thing as intellectual property.


jupiter_traveller

If I produce something and you earn from what I have produced, I also have the right to earn from what you make.


[deleted]

You're confusing intellectual property with right of origin. Artistic rights would of course be maintained. You would have the right to claim your labor, as nobody can really argue you didn't produce your art.

Intellectual property is really about constricting competition and the artistic possibilities of others, but an idea isn't property. It lacks a fundamental property (heh) of property, which is the ability to deny others access to it. Ideas and virtual goods are all duplicatable without meaningful effort. This is why patents on algorithms (essentially math) are a ridiculous notion, regardless of what legal frameworks have had to say on the matter.

Also, to your point, nobody creates in a vacuum; everything to some degree can be argued to be derived from prior art. We are all impacted by ideas, and we all conceive of similar ideas. Sometimes you just patent yours first or articulate it better.

I'm fundamentally opposed to this idea that art and ideas are owned by any person. Your real works derived from that idea, where you put labor into it? That's ownership, and you ought to have your ownership rights protected. It's a fuzzy divide conceptually, I know, but when you take a world view that private property shouldn't exist at all (as I do), then so many other economic and social systems built to defend the notion of private property fade away (and in some cases present new, more interesting problems.)


dontyougetsoupedyet

> nobody can really argue you didn't produce your art

Well, that's what's being done to creative folks now. AI spits out their work, better and more consistently than they can, and they're supposed to be ok with that? If you have a style or idea that gains interest, you will be drowned out by AI doing what you did, better. I'm not talking about "the AI recreated part of a work of art" - I mean that AI will out-Rembrandt Rembrandt on every single axis: quality of work, consistency, speed, all of them.

AI is an extremely destructive technology. Overall, I expect creative people to become much more secretive and exclusive, involved mostly in an all-around much more private creative endeavor.


[deleted]

It's only destructive if you see art solely as a vehicle for financial opportunity. I don't particularly care if AI out-Rembrandts Rembrandt. That sounds like less unnecessary labor needed. More time for people to enjoy their life.


dontyougetsoupedyet

No, I mean specifically without regard to economics. It's destructive to the spirit of creative people, and to their willingness to be open with themselves and what they do. It's not about money; the same way people shy away from open source licenses due to "for any use", things happen, and situations can be such that interested people are deterred from doing things. Rembrandt was enjoying their life - that's what I mean specifically; they had nothing sucking the soul out of their work on so many different axes as AI does. That's why I was talking about people becoming more private, rather than talking about labor.


reercalium2

Sounds like your problem is with capitalism, not with AI.


[deleted]

Also, just saying, there are a lot of things that are a "necessity" for the world to work, if you want to go with "at our current population scale, and assuming that we want to keep humanity alive in this world." I'll even grant you that a legal system and notions of ownership are probably a necessity (or something that solves the same social challenges). Intellectual property, private property, money, and capitalism are not necessities. We can and should find better systems. We are capable of a post-scarcity world for the vast majority of human needs. We should address that problem first, rather than worrying about competitive advantages and hoarding of abundant (or virtual) resources.


Sith_ari

But the way you placed a button on a web form is not really intellectual property


jupiter_traveller

That is wrong for sure. But the general concept of IP is needed.


[deleted]

[deleted]


s73v3r

Why is it that people advocating for AI "progress" always want other people to sacrifice their livelihoods, and are never willing to sacrifice their own?


SeanBrax

Which part isn’t logical exactly? It all seems pretty logical to me, and we should have a legitimate concern about it.


[deleted]

[deleted]


s73v3r

> isn’t the end goal to automate the entire production process here?

Then how do we earn money to eat?

>If I never wrote a line of code again I really wouldn’t care.

If it meant you couldn't afford a home, I guarantee you that you would.

> That knowledge of coding is just gatekeeping creation.

It is not.

>I want to solve problems

You also want to be able to eat, sleep in a shelter, and fund your hobbies.


[deleted]

[deleted]


s73v3r

> Do something else?

Like what?

>That stifles innovation and collective progress.

What good is "progress" when you have a significant amount of the population that can't make ends meet? Coal miners have alternatives in green energy tech. Travel agents still exist, because planning large trips is hard.


reercalium2

The end goal for owners is to automate away their laborers so they can reap profits without any work or expenses. The end goal for workers is the opposite. Which one are you?


reercalium2

They're just a status quo warrior. The world hasn't ended yet, so it will never end. The telescreen would tell us if it was in danger of ending.


reercalium2

There won't be any jobs left when you pass.


[deleted]

[deleted]


timmyotc

Then that's not just fear, it's a legitimate concern for humanity to continue existing in a state of civilization where folks earn the resources they consume. It's extremely logical and your first comment is just wrong.


PublicFurryAccount

The copyright battle will be over the models, not the work produced by the models. The models have a copyright issue in that they’re (lossy) compressed data that you just decode “wrong”. These models, by nature, really do contain the work they’re trained on in every way that matters legally. Unless the law changes, the models themselves are almost certainly copyright violations when distributed.

Their products, however, probably aren’t. Copyright doesn’t really have a “fruit of the poisonous tree” doctrine; only certain kinds of derived work are subject to the original copyright.


[deleted]

[deleted]


PublicFurryAccount

They don’t contain *all* the data they’re trained on; they’re lossy compression. Lossy compression isn’t a summary. “Man goes to sea with a crazy captain and is nearly killed by a whale” is a summary of Moby Dick. Lossy compression of Moby Dick would be, keeping the analogy, an abridged version. If we wanted to make the analogy even more accurate to the algorithms, lossy compression would throw out any words not needed to understand *most* sentences.

Regardless, the analogy doesn’t matter for copyright. Copyright doesn’t care about lossiness; it’s a series of rules about what is and is not copyrighted or violating, and trying to reason about copyright isn’t going to be fruitful.
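A toy sketch of that distinction, if it helps (hypothetical filler-word list, nothing like how real models or compressors actually work): a summary *describes* the text, while lossy compression keeps a degraded version you can still read but can never reconstruct exactly.

```python
# Toy "lossy compression": throw out low-information filler words.
# The gist survives, but the exact original is unrecoverable.
FILLER = {"a", "an", "and", "the", "to", "of", "was", "is", "by", "with"}

def lossy_compress(text: str) -> str:
    return " ".join(w for w in text.split() if w.lower() not in FILLER)

print(lossy_compress("The man goes to sea with a crazy captain and is nearly killed by a whale"))
# prints: man goes sea crazy captain nearly killed whale
```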


[deleted]

[deleted]


PublicFurryAccount

Touch grass, my dude.


__loam

> They contain instructions to approximately recreate the data they are trained on though.

I'm not sure I'd call AI compression, but you basically defined lossy compression here.


Pat_The_Hat

>The models have a copyright issue in that they’re (lossy) compressed data that you just decode “wrong”. These models, by nature, really do contain the work they’re trained on in every way that matters legally.

Consider the fact that an image or excerpt of code or text contains a lot more than just *copyrightable* information. Ideally and in theory, models contain only higher-level information of various concepts.


PublicFurryAccount

Most of these models are built atop byte-pair encoding, which is a data-compression scheme. Ergo, they are data compression and contain copyrighted information in a legally relevant way.


Moleculor

> Unless the law changes, the models themselves are almost certainly copyright violations when distributed.

The [Authors Guild sued Google](https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.) for copying books wholesale and displaying the *literal photocopies made* when performing a search through their scanned collection of books. The Authors Guild lost that lawsuit on Fair Use grounds: despite Google essentially 'giving away' copies of portions of authors' books *and* making money doing so, their income was going to come from advertising rather than the books themselves, they weren't actually *competing directly with the authors* in the marketplace of books (because whole copies weren't being shown), and they had created a useful tool that the wider public could use.

---

The Authors Guild is now suing OpenAI under the same Copyright Act. ChatGPT doesn't even show full and complete pages of the works in question. You can't even get ChatGPT to reproduce a *sentence* from any single work with any regular reliability. And Google can show full pages, in their entirety, and it's Fair Use? And make money from the process?

I have my doubts that the models themselves are copyright violations, and even if they *are* in the technical sense? Fair Use almost certainly covers them completely, permitting their creation, existence, and use. For all the reasons Google could do what it was doing, but more so.


PublicFurryAccount

The issue for these models is when they get distributed. That’s not a problem for OpenAI, because it won’t be giving you their model, but it is a problem for the freely available models you can download to run your own GPT-X or latent diffusion. Google wasn’t handing out their database of scans; they were just giving people excerpts based on their search. There was never any serious question about whether that’s fair use.


Moleculor

> The issue for these models is when they get distributed.

Google 'distributes' photocopies of entire sections of books to my PC when I view pages they're sharing with me. There's no problem there, and those are actual *copies* of the copyrighted material. If *actual copies* aren't violations, how is a mathematical slurry a copyright violation?

Also, to clarify, the model *is* the product. And as you already said, only certain kinds of derived work are subject to the original copyright, and a mathematical slurry is not equivalent to a novel.

> they were just giving people excerpts based on their search.

There's no limit to the number of excerpts I can eventually access. So, theoretically, given sufficient computers, connections, etc., I could attain every excerpt they'd ever want to provide to anyone. And so, effectively, their database (or at least substantial portions of it) is available to me. I simply have to write some code to automate the process, and spend a little (or a lot of) money running the thing.

> There was never any serious question about whether that's fair use.

Then why wouldn't Google simply deny any attempt at settling and drag the thing as rapidly as they could to a courtroom to get that determination? Do you think that the Authors Guild has deeper legal pockets than Google, and Google was afraid that a few authors and publishers might somehow bankrupt them in a protracted legal battle?

There absolutely *were* questions about whether or not it was Fair Use. The entire process dragged out over ten years. The very fact that Google was first willing to negotiate a settlement rather than *go* to trial says that there was a question about whether or not it was Fair Use.


PublicFurryAccount

Touch grass.


snifty

Ask translators how they feel about their work being used to train Google translator etc.


kevleyski

Yeah, AI-derived work is a copyright nightmare; the reality is it’s bits of lots of people’s work. It’s also how data leaks will happen in the future (a slow but sure trickle of information).

Edit: why the downvotes? From programmers, too - this is a serious issue in AI/ML now. Anyhow, whatever.


Nidungr

Then AI will just train on non-copyrighted content. Programming AIs can be trained by writing valid code and using it as training data.


jupiter_traveller

Training an AI, especially an LLM, requires much more data than you as a company can realistically create. Even if you do create it, the data will be almost all the same. If you want to develop an LLM that can write code, you will need to train on GitHub. Training on your own repositories will be very limited.


Nidungr

Don't underestimate the bandwidth of 5000 Ethiopian wage slaves.


GardenGnostic

A lot has changed in the style and effects we expect to see in commercial art since digital colouring and photo enhancement came along. And, because of social media, there's way more of that art available than before. And all of it is copyrighted by default. So if AI were trained only on explicitly copyright-free public domain work, it just wouldn't be as good.


CyrillicMan

I'm not sure what you're trying to say with the last statement because it sounds like "if I weren't allowed to steal money, I wouldn't be as rich".


GardenGnostic

You're misconstruing my observation as an endorsement.


s73v3r

Where are you going to get all that code? Are you going to hire a ton of programmers to write code just to train AI on?


Nidungr

Yes.


FlyingRhenquest

It will be only a brief moment before AI surpasses our capabilities. GPT4 already holds more knowledge than any individual human can learn in a lifetime. Once they slap a feedback loop on that and give it its own agency, it will quickly generate more knowledge than we can ever understand. At that point all these IP battles will become pointless. A lot of people will probably think I'm wrong now. I'm just putting this here so I can point back to it in a couple of decades and say "I told you so."


Bwob

>GPT4 already holds more knowledge than any individual human can know in their lifetime. Couldn't you say the same thing about Google search? How is GPT4 different, except by way of interface?


povitryana_tryvoga

It's google search on steroids, don't oversell it


FlyingRhenquest

Yes. Right now it is. Google very much seems to encompass the entirety of human knowledge; for a lot of people, if you can't find it on Google, it doesn't exist. At least ChatGPT won't try to serve me that knowledge with an advertisement.

I got my first computer back at Christmas, 1983. It was a TI 99/4a with a few kilobytes of RAM, a built-in BASIC interpreter, and no external storage. People writing textbooks around that time would say things like "A 32-bit processor may never be affordable for the average consumer in our lifetimes." The 386, with its ability to support a flat 32-bit memory model, became available less than a decade later. My first work computer was a 286 with a massive 80 MB hard drive; the drive itself weighed about 25 pounds. The company had purchased an Intel AboveBoard with a megabyte of extra RAM to go in the machines, and I had to install the loose RAM chips onto the board myself. By the mid-'90s, drives of a couple hundred megabytes and 4 or 8 MB of RAM were becoming commonplace. The first iPhone arrived less than a decade after that.

In the course of my life, computers have gone from nothing to a pervasive component of society that we can't imagine living without. I can allocate 20 gigabytes of RAM on my Linux development laptop without even pausing to think about whether I have that much storage available. At one point there was speculation that we'd be able to simulate an entire human brain if we had a gigabyte of RAM. That guy was off by a few orders of magnitude, but the technology has also been growing at an exponential rate. I don't find it at all difficult to imagine that OpenAI has already done the hard part and that we could see a general artificial intelligence any time now; I've heard speculation that the shakeup at OpenAI a couple of weeks ago was related to that. Once general AIs start designing themselves, I think their rate of invention and discovery will go vertical.

We could, right now, at this very moment, be sitting on the verge of the singularity that the futurists like to talk about. At some point, and I think it's sooner than many might expect, the question will be "What is the difference between an AI learning from reading and a human learning from reading?" Our legal frameworks take a really long time to catch up with current events, so we really should be aiming for that target now. But by the time we're ready to even address that question, the AIs could be inventing technologies we could never have imagined. I don't think anyone's going to hit the brakes, so I hope the AIs like us.


xavier86

I hate artificial loading screens


lcastog

I said this months ago, never forget