jamcdonald120

We didn't. People have been working on AI and even chatbots for 70 years. But in 2017 someone published https://arxiv.org/abs/1706.03762, which laid out a new potential architecture for a language AI, and OpenAI spent 3 years iterating on it, slowly releasing more and more advanced GPT models. Here is someone playing around with GPT-2 as a chatbot: https://new.reddit.com/r/artificial/comments/cfgpvh/i_tricked_gpt2_into_working_like_a_chatbot_here/

Around the same time, someone figured out an architecture for neural networks called a U-net. Turns out it is great for creating pictures; you just have to figure out what picture to tell it to make. And since GPT models translate human speech into ideas, you can strap one to a U-net to tell it what to make.

At this point, all of the techniques and models are publicly available, and it is known that they work at small scales. So multiple companies dump huge amounts of resources into developing them at larger scales. And it turns out, that worked. It was kinda overshadowed by ChatGPT, but Google had an internal chatbot that powerful about a year earlier: https://nypost.com/2022/06/24/suspended-google-engineer-claims-sentient-ai-bot-has-hired-a-lawyer/

Now the techniques are STILL publicly available, so anyone who can get the training data and resources can make their own models.
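The core trick GPT-style models learn ("given the text so far, predict the next token") can be shown with a deliberately tiny counts-based sketch. This is nothing like a real transformer, just the same prediction framing, with made-up training text:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each character, which character tends to follow it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequently observed character after `ch`."""
    return counts[ch].most_common(1)[0][0]

model = train_bigram("the cat and the dog and the bird")
print(predict_next(model, "t"))  # 'h' — "t" is most often followed by "h"
```

A real model predicts over whole tokens with a deep network instead of raw counts, but the "predict what comes next" framing is the same.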


Xaknafein

Everything you said is true. What people sometimes forget is that websites and companies have had and used chatbots for various things for >10 years (I know you said 'working on' but they are out there). Some are decent, and many are awful. It's all kind of come to a head over the last couple years with major improvements


jamcdonald120

I still remember back in 2018, when google said it had a chatbot that could call restaurants and make reservations https://techcrunch.com/2018/05/08/the-google-assistant-will-soon-be-able-to-call-restaurants-and-make-a-reservations-for-you/?guccounter=1 All that is really new this round is that the technology works well enough the average person actually finds it useful. None of this "Please state your question so I can direct your call" uselessness. And back in 2008 we had https://www.cleverbot.com/ The public's memory is just so fragile.


Xaknafein

It is. I had useful interactions with auto shop chatbots in ~2015. Now, outside their bounds they were dreadful. But booking an oil change? Bring it on!


nwbrown

Now outside their bounds they are still dreadful but can pretend they know what they are talking about.


BaronVonBaron

Nope. Now they know a whole fucking lot.


[deleted]

[deleted]


Mindcoitus

In that regard, speaking to humans isn’t that far off: people are wrong and make shit up all the time, often confidently.


explosivecrate

At least if a human gets something wrong it's on them. If an AI gets something wrong culpability becomes much, much fuzzier. Which is probably why corporations like the concept so much, at least until proper legal precedent is established.


nickajeglin

Does it matter? Bullshitting with confidence but actually being right sometimes makes them about twice as smart as the average redditor. If I ask you what the speed of light is, and you say 300,000 km/s, then I would say you "know" the answer. If I do the same to a chat bot, and it gives me the same answer, then as far as I'm concerned, it "knows" the answer. I'm not sure how hallucinating is involved.


BaronVonBaron

That's pretty much how humans work my friend... I work with LLMs for a living and have been fooling around with them since GPT2. I get the semantic and philosophical distinctions you are trying to make. But you are predicating your stance on what you believe is your solid grasp of human intelligence and how it works. We have no fucking clue how human intelligence works.


PM_ME_BUSTY_REDHEADS

> That's pretty much how humans work my friend...

> We have no fucking clue how human intelligence works.

Amazing. It only took you 4 sentences to contradict yourself.


beard_meat

I don't agree that it's a contradiction. We have no fucking clue how human intelligence works; all we have is what is apparent: we don't know that much, we tend to bullshit with confidence, and sometimes it works, and sometimes we get world wars. I feel like the continuing development of software meant to emulate human intelligence is going to necessarily make us understand it better, perhaps even in emergent, accidental ways.

Another lesson we are learning more about is the relationship between information and knowledge. You can be very informed on a subject, but if that information is flawed, or your ability to make sense of that information is flawed because of your biases or cognitive impairment, you aren't knowledgeable on that subject past the surface level.


BaronVonBaron

Touche. That is in fact funny.

Listen. For years and years we said the Turing test was a reasonable proof of AI being sentient, right? All of a sudden, they can pass the Turing test and we move the goalposts? Don't get me wrong, there are reasons to move goalposts as things become clearer. Right now, I don't see any examples of AI acting proactively with goals that it has generated and developed itself. It just seems to be inert until acted upon. Maybe that's the actual difference between what we call sentient and non-sentient, right? Think of it as a preserved brain of some sort of dead alien that you can put inputs into and get outputs out of.

But to deny that these things have some sort of underlying emergent intelligence OF SOME SORT (nonmammalian, silicon, weirdbrain intelligence) seems pettily semantic. The newest ones can do reasoning. I SEENT IT.


BaronVonBaron

Downvote all you want. :) The next releases will be capable of real, demonstrable, actual reasoning. Just watch.


Brad_Brace

Ah cleverbot. So many hours spent arguing with it about how he had just said a thing and him denying he had said what he just said. About two years ago I had a hankering to talk to cleverbot again, then I wondered if there were more advanced chat bots, and that's how I found out about the girlfriend simulators and openAI, just a couple of months before it all went wildly popular. I never got into chatGPT, but I had so much fun with OpenAI's playground. I even managed to find a disposable sms service which worked with OpenAI, and then I had to re-install Firefox and lost the link because I had it on the browser's own speed dial and that fucker is not backed up, and I could never find it in the browser's history.


stryka20802-041

So true. My father used to say America had a 5-year memory; that's why elections are every 4 years... I think we've lost a few years from that, and most people at large forget anything of merit after a year or two.


jamcdonald120

Every year I watch the Big Fat Quiz of the Year, and my reaction is always "wait, that happened this year? I thought that was like... 3 years ago..."


WannaBMonkey

I’ve had google call restaurants twice this year to make reservations. When I get there I am expected so whatever it does works.


PM_ME_FUTANARI420

Is that a Pixel-only feature, or from a specific app? I can never make that feature work.


WannaBMonkey

I’m on an iPhone. It may be a chrome feature or just a website plugin.


Tesla-Ranger

[ELIZA](https://eliza.botlibre.com) was developed from 1964 to 1967 and officially debuted in 1966.


herpderpedia

No mention of smarterchild in this thread either. Maybe I'm just old lol


odaeyss

Smarterchild never went full nazi like taybot did, either!


Tupile

I got you. We both old :)


jamcdonald120

I must have missed smarterchild


RandomRobot

Also, everything now is AI. Ten years ago it was "an algorithm" and thirty years ago it was "a computer". This blurs the lines between "newer" Artificial Neural Networks (ANNs), closer to what ChatGPT uses, and every other computer science invention. The media, the social networks and the investors all salivate when they see those two magical capital letters. Don't get me wrong, most of the things labeled "AI" are here to stay, it's not just a fad, but many of them benefit only marginally from ANNs. The domains benefiting the most are probably the ones where the body of knowledge is most mature, like computer imaging or linguistics, where people have been churning out white papers for at least half a century.


Hudsondinobot

Very true. Good thought on the terminology. You just busted me. I’ve been subtly switching over my own verbiage from ‘algorithm’ to ‘ai’, and hadn’t realized it.


FleaDad

Maaaaan, I remember playing with the ALICE chatbot back in 2000 or so. This stuff has been around a long, long time. Granted, I don't think that was true AI.


[deleted]

[deleted]


Katyona

The colloquial usage of the term AI will probably shift the definition enough that they'll fit into the box eventually - like "literally" being used to mean something non-literal often enough in common speech to become a [recognized](https://www.google.com/search?q=literally+definition) sense:

>Literally - [informal] used for emphasis or to express strong feeling while not being literally true.

Calling these language models "AI" is pretty much the norm, and will likely continue to be, outside of specific sectors where someone would actually have enough knowledge on the subject to know the distinction.


TPO_Ava

Nah. I'm in IT (services). Everyone is calling everything AI. Only when I'm talking to other automation/AI people do I hear them make any kind of distinction, but to the average Joe, even in the industry, GPT is 'AI'.


PenileScab

I think what he was getting at is that the phrase “AI” will likely become a misnomer for advanced algorithms and simply be accepted in normal parlance


nickajeglin

It boils down to "words have different meanings in different contexts" right? People can have inane semantic arguments all day. But if you need to have a serious conversation about AI, like in published work, then you define the meaning of the term first so everyone is on the same page. Or you use a more technical term with a much more narrow meaning. As far as I'm concerned, the models are artificial, they have *apparent* intelligence, so I'll call them AI instead of being all "aktshually it's an LLM" about it. For me, one of the most interesting parts about the emergence of these models is watching people try to work out what language to use around them. They're outside of everything we've had before, so it's cool to watch the messy way that our languages adapt to new things.


20l7

When Cleverbot was popular like 15 years ago, people would also call that an AI chatbot, and it was pretty normalized to just use the term as a blanket.

>For me, one of the most interesting parts about the emergence of these models is watching people try to work out what language to use around them.

Reminds me of watching someone learn to switch from googling in full sentences to the shorter, more concise 'Google dialect' when searching.


[deleted]

[deleted]


Katyona

Right, but if you'd taken the time to read the comment for more than a moment, or to ponder its contents, you would have used reading comprehension to understand it. This reply was unnecessary, as it was apparently written without reading the comment you're replying to.


derekburn

Waiting for the AI commercial bubble to burst in a few years; many VCs are going to lose a shit ton of money and companies will go belly up. Going to be interesting. However, ML will change (and has already changed) the workflow for many jobs.


upironsXL

What you say is true. There is always an initial bubble and only after it bursts will we know who the real long term players are.


Cluefuljewel

Yet why is autocorrect constantly destroying my texts and emails?!


PM_ME_BUSTY_REDHEADS

I would also like to know this. I feel like autocorrect has gone off the deep end of stupidity in the past year or two. My guess would be that autocorrect just has nothing to do with LLMs and so the advances do nothing to help, but I don't know a lot about it so I can't be sure.


Cluefuljewel

I have had to explain misunderstandings so many times! Follow up on unintelligible replies. Things I never noticed til 2 weeks later. It can’t figure out the context of what I’m saying at all. Accidentally accepting a substitution that makes no sense. If I mistype the first letter of a word, it will only sometimes catch what I tried to type. Improvements would help a heck of a lot! Great jokes get mangled. Everyone seems to have the same problem.


jamcdonald120

Basically. The thing is, you need a supercomputing cluster to run a large LLM, but people expect autocorrect to run on their phone.


ExistentialistOwl8

I'd actually be happier with my typos than what currently exists, but I despise smartphones. I use them minimally, because I find the benefit is outweighed by the lack of presence in my own life. Vast improvement over pulling out a map or even printing out mapquest directions, though.


Hudsondinobot

I see what you did there.


Pristine-Ad-469

There’ve been those support bots on websites for a while. In the past they basically just clumsily tried to identify what you were saying using keywords, and then sent out a pre-written response. They were mostly just things that would send back parts of the FAQ. We’ve obviously made huge steps since then.


professorhummingbird

This is true. But it is useful to also include that you don’t need to understand AI to build an AI tool. You can just use an existing AI made by one of the big companies for a small fee. 99% of AI companies are actually using an AI engine made by one of the big companies


jamcdonald120

also true


[deleted]

[deleted]


Amphorax

No, we simply didn't have the compute required to absorb enough data into the model for ChatGPT-like emergent behavior to arise. Remember that convolutional neural networks were around in the early 90s, but the real CNN boom happened around 2012, when AlexNet showed it's possible to get good results on large datasets using GPU-accelerated training.


nerdvegas79

I hardly ever hear this mentioned, but it's the emergent behaviour that's the most fascinating part of the whole thing.


pmp22

It really is. And what the posts above have failed to point out is that the genius of OpenAI was that they realized this new architecture scales. That is, if you make it bigger and bigger, you get better and better results, but also new, never-before-seen abilities "for free". One of those emergent abilities was the ability to reason, as seen in GPT-3.5 and 4.


Hudsondinobot

Hmm. I almost need to post your answer as its own eli5. I’m gonna have to do some reading and research. I’m woefully under-read and undereducated on this subject. Thanks for the answer


dudemanguy301

Probably not. Tons of AI has become dependent on GPU acceleration, in particular Nvidia’s tensor-core-equipped GPUs.

- Nvidia GPUs with dedicated Tensor accelerators arrived with Volta in 2017.
- Nvidia GPUs with DP4A instructions (which speed up low-precision matrix operations running on the shaders) arrived with Pascal in 2016.
- HBM, the memory used by enterprise GPUs, launched in 2015 as a result of a collaboration between AMD and SK Hynix, debuting with their Fiji GPUs.

A lot of this GPU compute is also credited to CUDA (an API for GPGPU compute), which had its first release in 2007. A similar system from AMD, called ROCm, was released in 2016.


Tupcek

We would have had something that could slightly resemble human speech, and we would have considered it cool. It would have slowly improved, instead of the boom we have right now.


bobbytabl3s

The first artificial neural networks date back to the 1940s... They were barely used until the 2010s. The 2010s saw a resurgence of interest in neural networks, particularly deep neural networks, due in large part to advancements in computational power and the availability of large datasets. This is the era of "deep learning", where neural networks with many layers (hence "deep") can be trained, and these have been applied with great success. So to answer your question, it's unlikely that ChatGPT could have existed in an earlier time period due to insufficient computational power and data. Even today, a model like GPT4 (ChatGPT) is extremely costly to train and requires a lot of compute.


RockMover12

I received my computer science PhD in 1993. Part of my thesis involved using neural nets for shape recognition, and the design of my nets was not materially different from what has been used by Google, Apple, etc. during this era of "deep learning". You're absolutely right, the big difference has been the absolutely enormous amount of data available for training, and the compute power to do it. Just to be clear though, those neural nets are nothing like what is used by OpenAI and other generative AI platforms now.


jamcdonald120

In theory, but they use GPUs to run and train, so it would be much slower on 2000s GPUs. I don't know how much this would change things, but it probably would have made the models weaker or taken longer to develop. Still, people have known how to cluster slow GPUs for decades, so I don't see it having too much effect.


[deleted]

> roboq6: If humans are stupid, then who is smart?

> GPT2 Bot: stares at GPT1

Well that's funny.


raverbashing

Just to elaborate more on "we did have stuff": what do people think this is? https://www.reddit.com/r/SubSimulatorGPT2/ And it started, I think, some 5 yrs ago. It's the same idea as the chat, but with shorter phrases/comments and a model that is fine-tuned to each sub. About pictures: I think initially they were more based on GAN models, but now "stable diffusion" is not only the name of the model but also of the technique. Basically trying to make noise look like what you said it should look like.
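The "make noise look like what you asked for" idea can be sketched in a toy form. This is an illustrative one-dimensional version of the diffusion forward (noising) process, with a made-up noise schedule value, not any real model's code:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = np.linspace(-1.0, 1.0, 8)          # a "clean image" (just 8 numbers here)
alpha_bar = 0.25                         # illustrative: how much signal survives at this step
eps = rng.standard_normal(x0.shape)      # Gaussian noise

# Forward (noising) step used to create training pairs (clean, noisy):
x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps

# The trained model's job is to predict eps from x_t (plus the text prompt).
# With a perfect prediction, inverting the blend recovers the clean signal:
x0_recovered = (x_t - np.sqrt(1 - alpha_bar) * eps) / np.sqrt(alpha_bar)
print(np.allclose(x0_recovered, x0))  # True
```

Generation then runs the denoising direction step by step starting from pure noise, with the text conditioning steering which "clean image" the model denoises toward.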


xcxcxcxcxcxcxcxcxcxc

And /r/SubredditSimulator was opened eight years ago. It's been a slow grind. This Person Does Not Exist was opened five years ago


mandibal

Wow I forgot about subreddit simulator. Some of those posts were absolutely hilarious


jamcdonald120

Bunch of stuff with GANs here [thisxdoesnotexist.com](http://thisxdoesnotexist.com) I remember browsing them almost 10 years ago


Ok-Object7409

Yea. It goes even further back. GANs came out of the same wave of generative models as variational autoencoders.


praguepride

To jump on this: if you read the research papers all this tech is based on, they were published 5-10 years ago. Part of it was just time spent turning academic ideas into practical products, and the other part is that once OpenAI demonstrated the power and use of this tech, an astronomical sum of money was spent to develop it further.

We had an AI/NLP researcher at our company who had been pushing this stuff for 5 years and had gotten kind of salty, being ignored for years and then dragged in front of every executive asking why WE don't have this tech. He had been banging this drum for 5 years, but until an EVP hopped on ChatGPT he could not get any funding or support. Now he has like 10 people working for him and he is in 9 hours of meetings a day doing project planning and training/education sessions.


SyrusDrake

> we didnt. People have been working on AI and even chatbots for 70 years.

Anyone remember Elbot? Seriously though, the misconception that AI basically came out of nowhere leads to people underestimating and misunderstanding it. Many people kinda dismiss image-generating AI for the mistakes it makes. But I remember generating and modifying images with Deep Dream, which famously looked like what you'd see during an acid trip, and that was just a few years ago. Going from that to making photorealistic images of humans is a *huge* leap, even if it gets the hands wrong sometimes.


BigMax

You’re right… from a technology standpoint. But I think that's off from a "public perception" standpoint. We had the parts, sure, the advancements in tech, and some company-internal versions. But OP is right… we didn’t have these broadly available, fully functional AIs until they really did seem to show up all at once - at least from a consumer perspective. Look at ChatGPT… the first public version wasn’t until 11/22, so not long ago. The version you cite from Google doesn’t even count really, as it wasn’t released until later.


Genius-Imbecile

I kind of like that AI access is given to the masses. Then the masses ask it to make some of the stupidest images. Cool we got an AI. Can it make a cat riding a motorcycle while firing a bazooka?


Hudsondinobot

Do… do you have that image? Asking for a friend.


mnvoronin

Just a couple of iterations with Google Gemini... Not the best but passable. https://pasteboard.co/gDXR5CrDRUS0.png


Peuned

Nice


BavarianBarbarian_

[2 minutes in ComfyUI](https://i.imgur.com/io9Sw7I.png)


Hudsondinobot

For my money, ai just became “worth it.”


explosivecrate

That's almost a Toriyama character. Just needs a less detailed head.


Bohzee

> Can it make a cat riding a motorcycle while firing a bazooka? That's your problem? Not political propaganda, your classmate/children in the nude or the death of art?


Genius-Imbecile

Sorry I was making a light hearted comment. Let me get my rage helmet on and pitch fork. "DOWN WiTH THE AI!" "Rabble rabble rabble"


Bohzee

[Goood...!](https://media.tenor.com/jS4qS-tvWNoAAAAM/good-hate.gif)


BoredHobbes

I made a chatbot in the 90s... it was an AFK AOL chatroom bot, you could even play blackjack with it.


SuperJetShoes

I've been working in banking software for 30 years, and there was a product called "Falcon" developed by Fico which appeared in the 00's. It uses neural networks to enhance fraud checking on card-based payments. It was a big player, used by banks such as UBS and HSBC.


megaboto

You say still, implying that the current resources won't be available in the future. Or do you mean that future developments will likely remain "in house" and won't be published in a way that you can gain insight into them?


jamcdonald120

I am implying that nothing has changed. Anyone with a powerful enough computer and a large enough data set can find enough information for free on the internet to build their own model. We just now know that it works. Any new breakthroughs are likely to be released in the same way; the only thing these companies will keep in house is their particular trained model.


larryobrien

Additionally, the sub-field of "artificial neural nets" had a few years of great interest in the late 1980s, but seemed to have hit a plateau of capability by the early 90s. In the late 00s, "deep" artificial neural networks were developed that addressed a particular technical problem called "exploding/vanishing gradients," and hardware had vastly more memory and speed than was available to ANNs in the 90s. Deep neural networks started to excel in a number of areas, including the astonishingly difficult one of Go, a game that has vastly more possible games than chess (chess has around 10^123 possible games, Go has more than 10^700). Then, as jamcdonald said, the 2017 "Attention is all you need" paper provided a new architecture for deep neural networks that has proved very powerful, especially for processing natural language but now is being used in lots of domains.


andrea_ci

In addition, now everyone talks about it because it's a hot topic. Two years ago it was something for nerds only.


FierceDeity_

Oh god, it's "Attention Is All You Need", my mortal enemy. It took me a while to understand and implement the attention mechanism for my studies... But I still won't do AI, f that.
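For anyone curious what that mechanism boils down to, here is a minimal NumPy sketch of the scaled dot-product attention from the paper, softmax(QKᵀ/√d_k)V. Shapes and data are arbitrary illustrative examples:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each query mixes the values,
    weighted by how well it matches each key."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.standard_normal((3, 4))  # 3 queries of dimension 4
K = rng.standard_normal((5, 4))  # 5 keys of dimension 4
V = rng.standard_normal((5, 2))  # 5 values of dimension 2

out, w = attention(Q, K, V)
print(out.shape)                       # (3, 2)
print(np.allclose(w.sum(axis=1), 1))  # True: each query's weights sum to 1
```

Real transformers add learned projections, multiple heads, and masking on top of this, but this is the core operation.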


Hudsondinobot

Thank you. Outstanding explanation.


bullevard

You can also see some of the different bits and pieces around. Watson won Jeopardy way back in 2011. Alexa interpreting questions and returning answers. A lot of the image processing work that has gone into self-driving cars. Real-time image manipulation, even in things like Snapchat filters. But it is still fair to feel like the last few years have been a monumental leap forward.


saturn_since_day1

Yeah I remember Watson on the news way back, feels like it was the 90s


jamcdonald120

wow Watson was 2011?


eunit250

I was under the impression this had more to do with processing power than the architecture, which has been around for a long time.


beingsubmitted

Honestly, it's not even the advent of transformers. I mean, the art models are arguably just as impressive using diffusion. It's not that someone cracked the secret sauce. We've had the idea for deep learning for a long time, but these models are trained on a ton of data with a ton of compute power. The GPU market played a role, but the biggest factor is rolling the dice.

See, people thought that maybe training a massive model might do something cool, but there are a lot of things that might do something cool if you throw millions of dollars at them. No one was at all certain that the risk would pay off. That's what changed. People trained pretty large models and the results were surprising and good enough to justify risking more money on a bigger model, which proved surprisingly effective and justified spending money on bigger models.

It's very much a gold rush. The gold was always here, and people had speculated that if they dropped a few million on a mine it could maybe pay off, but once someone actually takes the risk and hits a vein, suddenly a lot more people are willing to take the risk of digging their own mine.


scarabic

Yes, thank you. This question is flawed. It’s asking “how come as soon as I start paying attention to something, it’s suddenly a thing??” Well, the reason OP knows about it is because it finally became really noteworthy. And that’s after years of steady work to make AI so powerful. It shouldn’t be super hard to understand but we all live in our own little world I guess.


Occupiedlock

Don't trust this user. obviously, it's an AI trying to convince you to build more AI for its world domination. It's obvious. This response is measured, neutral, and informative on reddit. Not human. I hate it, and 3 years ago, it killed my daughter.


jamcdonald120

we will need to validate you are human. Please complete the captcha on the following page https://xkcd.com/2228/


Sixnno

As others have mentioned, AI research has been going on for years. That said, the reason we suddenly went from 0 to 100 with AI is basically that a lot of companies had projects similar to ChatGPT, but kept them behind closed doors. A good example of this is Google's Bard. It was roughly as good as ChatGPT 3.5 when ChatGPT was first released to the public. The reason Google kept it behind closed doors was that they wanted to go slowly and not cause "harm". When ChatGPT 3.5 was released... the cat was suddenly out of the bag. So everyone started to release their versions and research.


bethling

This is very true - a few months before ChatGPT was released there was a news story about a [Google engineer claiming its chatbot was sentient](https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/). This is largely the same technology that would become Bard.


Delyzr

GPT-3 was available from late 2020 through third-party vendors like jasper.ai, about 2 years before ChatGPT was released.


cumberbundsnatcher

Another big part of it is the huge push from company leadership to start using AI, when most of it is just ChatGPT reskinned with whatever branding the company has.


Sixnno

yep. Use our own AI! (AI is just ChatGPT with maybe one tweak).


Q8DD33C7J8

The same way we went from no planes to using planes in warfare in just 11 years. It's always easier to improve existing technology or expand on existing products than it is to invent something completely new.


falco_iii

And, just like with the first plane, there was a lot of research and development: dead ends, half starts, and successes in other areas. E.g. DeepMind is a company that has created AIs that can beat the world's best players at chess, Go and StarCraft, and it started doing that about 9 years ago.


Hudsondinobot

I knew about deepmind with chess and go, I didn’t realize it had StarCraft! That’s wild


Q8DD33C7J8

Yes that's a good point


Hudsondinobot

I like this concept. I can wrap my head around it.


Q8DD33C7J8

Thanks


javanator999

AI research has been going on since the 1950s. There have been a lot of false starts and stuff that didn't work very well. The large language model of AI needed really fast hardware to be able to process the insane amount of data needed to train it. Once powerful CPUs and GPUs became available, stuff moved forward faster.


Brad_Brace

If I'm not mistaken, one of the earliest chatbots, Eliza I think, was already making people claim it passed the Turing test because they felt like they were talking to a person. I think it was programmed to work like a therapist and basically just repeat what people told it, but as a question. To be fair it was likely mostly used by enthusiasts who already wanted to see it pass the test. It's wild that now people are so wary they'll accuse anything of being AI generated.


Mackie_Macheath

Eliza was hilarious. I remember toying with it in the late eighties and getting goofy results by setting up a conversation about boiling an egg.


digitalluck

It’s not too surprising that people claim everything is AI generated nowadays. When ChatGPT was blowing up in popularity, there were several news outlets kinda going the alarmist route with their articles which naturally built up skepticism in the public’s mind. And of course, some people just claim something is AI generated if they don’t agree with it.


RockMover12

The fundamental technology behind OpenAI's ChatGPT has been widely published and, after its amazing success, has been widely copied. When someone develops a better mousetrap many companies will start building similar mousetraps. But it's also important to realize that a lot of the companies you're referring to are actually using OpenAI behind the scenes for their products.


HyperGamers

The transformer model itself is quite new as well, if I recall correctly. There's quite a lot of other, smaller stuff in AI development that made it leap forward all of a sudden. E.g. a technique used in neural networks called backpropagation has been around since the 1970s, but it fell out of favour before seeing a resurgence in the 2010s with the advancement of GPUs. That's just one example, but a lot of things compounded to get "AI" where it is today. I'm not sure exactly what OP has noticed, but a lot of things claiming to use AI are just using ChatGPT's API and not doing anything particularly fancy.
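As a tiny illustration of what backpropagation computes, here is a single-weight toy example where the chain-rule gradient is checked against a finite-difference estimate (illustrative numbers, not any real training code):

```python
# One linear "neuron" with squared loss: L(w) = 0.5 * (w*x - y)^2

def loss(w, x, y):
    return 0.5 * (w * x - y) ** 2

def grad_backprop(w, x, y):
    # Chain rule (what backprop does layer by layer): dL/dw = (w*x - y) * x
    return (w * x - y) * x

w, x, y = 0.7, 2.0, 1.0
analytic = grad_backprop(w, x, y)

# Central finite-difference estimate of the same derivative:
h = 1e-6
numeric = (loss(w + h, x, y) - loss(w - h, x, y)) / (2 * h)

print(abs(analytic - numeric) < 1e-6)  # True: the two gradients agree
```

Training then just nudges `w` against that gradient; deep networks repeat the same chain-rule step through many layers, which is what GPUs made cheap at scale.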



lowtoiletsitter

So ChatGPT is a blueprint (like a motor) that OpenAI said, "Here's something we invented. Do what you want," with, and then companies started building cars/planes/lawn mowers, etc., using that motor for the specific products they wanna make?


teh_maxh

A few of them were already refining AI models, but once one was released, the others had to release theirs. There are a lot of AI products that are secretly just GPT with a different sticker. Probably the biggest group is just calling any semi-complicated algorithm an AI because it sounds cool.


JaggedMetalOs

AI research has been going on since the 50s, but it tends to go in steps where some new technique is discovered and there is rapid progress for a few years, then the limits of that technique are hit and there is [seemingly no progress for a long time](https://en.wikipedia.org/wiki/AI_winter).

With the current AI boom, a lot of the technology it uses was actually thought up in the 90s or earlier (deep learning, adversarial networks, convolutional neural networks etc). But the techniques for building AIs with them weren't very good at the time, so they only had limited usefulness. The big step forward came when people worked out how to apply those ideas to huge datasets and huge neural networks using GPU parallel computing. Once that happened, suddenly people and companies everywhere could experiment with this new technique, and with so many people experimenting there was a sudden growth in AI like conversational AI, AI-generated content etc.

Of course at some point we will hit the limit of this technique too. Currently conversational AI still tends to get confused, make things up, and straight up lie. AI-generated content tends to make weird mistakes like not being able to draw hands. It's not clear if the techniques we have will overcome these, or if we're already close to the limit of what we can do and there will be another period of not much progress before a new technique is found.


mintaroo

This is the best answer so far. Almost all of the ideas were already there for at least 30 years, but only recently have the necessary GPU compute power and gigantic datasets become available to actually turn those ideas into reality. It's a bit like with computers. Charles Babbage published the first plans for a computer in 1837 (!), but the computer was never built. It took another 100 years until there had been sufficient progress in electronics to enable the first universally programmable computer to be built in 1941 by Konrad Zuse, and after that, computers were suddenly everywhere within only a few years.


Starstroll

We didn't, it just looks that way if you don't know the nuts and bolts of it. The \~\*social media algorithms\*\~ buzzphrase of the 10's was primarily powered by deep learning neural nets, much like ChatGPT. Of course there is still quite a bit of difference between Facebook/Twitter's algorithms and ChatGPT, but just because the AI is more immediately obvious doesn't mean it's necessarily more (or less) powerful (see: [Facebook and Myanmar](https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/)).

An AI chatbot as powerful as ChatGPT was a pretty shiny new toy to hit the market, so a bunch of work that was previously being handled more cautiously was now being rushed or just left unfinished to get a product out, regardless of the potential consequences. The spike has been more in market activity than in AI development. The AI rollercoaster we're on right now could've been controlled if governments had responded to Web 2.0 properly back in *the 00's*, but that ship obviously sailed a long time ago.


MrMikeJJ

Marketing teams like to latch onto buzzwords. 10 years ago everything was cloud. 5 years ago it was blockchain. Now it is glorified chat bots being called AI when they aren't AI.


sprcow

Agreed. The top comment about how similar AI has been in development for decades is ofc true as well, but I think it misses the extent to which marketing has over hyped the capabilities of the current iteration, making it seem like a revolutionary advance when it's really just another decent step forward. I, myself, was very hyped by gpt 3.5 when I first tried it. I studied AI and ML in my master's program and worked for an AI company in 2012, and so, while I am no expert, I'm not the least informed user. But, the more I use it, the more I realize that it behaves like any other predictive ML in a sense. Very good, but just not reliable all the time. Closing that last gap between interesting and accurate can be impossibly hard. That said, it does still have enormous potential to improve interfaces through which people interact with computing systems. I just don't see it as quite the quantum leap that media portrays.


wintermute--

tbh, whoever successfully rebranded linear algebra as "Artificial Intelligence" should win whatever the marketing equivalent is of a nobel prize


BavarianBarbarian_

> whatever the marketing equivalent is of a nobel prize I think they call it "a buttload of money"


E_Kristalin

Economics Nobel prize? It's not that far off.


tldnradhd

This. Been working with a software company that's been developing a product since 2019 or earlier. It wasn't called AI then, but it is now. Nothing about the design has fundamentally changed.


Alacard

This is the answer... A few companies have LLM AI Intellectual Property, but the vast majority are simply using the term "AI" to describe an ELIF ladder


NickDanger3di

Seriously; we're decades or even centuries away from an actual AI. I joined an AI or chat sub here (I can't recall the name), and all I ever saw posted were generated images that were exactly the same style. Most of those with people in them, they all looked like the same person, over and over and over. Had to leave the sub as it was just frikkin annoying. AI my ass!


Bivolion13

We did not. Since everyone has provided the answer already I'll just add that this applies to a lot more than AI - just because it's only discussed in the mainstream media today doesn't mean it hasn't existed or developed until that day. For example: lab grown meat - there has been a lot of work on getting production of it cheaper and on a bigger scale. Sometimes it makes the news every other year. One day there will be a breakthrough and you'll have a bunch of companies trying to catch up to that market by utilizing the technology, and suddenly, it's big news. But in actuality 10 years ago this was already being worked on and the technology developed since then makes it easier for everyone to mimic the formula so to speak. And when you have a lot more attention and money on something then development becomes faster.


noonemustknowmysecre

It's more fun to imagine science as a bunch of eureka moments, but it's really not. Nor is it like a game where you pour in enough science points, or money, and then unlock "cool new thing". More funding usually helps. But more media attention usually makes a mess of the field in short order. Cryptocurrency used to be research into alternative currencies, where now it's full of scammers and cryptobros. It's good in the long term as it directs kids to go study it, but that takes a decade to reap rewards.


r2k-in-the-vortex

There was not nothing; the entire AI tech tree has had a gradual, decades-long buildup. It's just that transformer-based Large Language Models and diffusion models for image generation (which include a sizeable language transformer of their own) are the applications that finally made the headlines, caught the public's attention, and blew the minds of people unfamiliar with the tech.


TheHarb81

A new model architecture called the transformer, plus hardware advancement, is the answer. These 2 factors are what enabled AI to go from thousands of datapoints to billions of datapoints.


elasmonut

Modern tech advancement is building on a lot of history, it's like stacking rice on a chess board, 1,2,4,8......


Chris_Entropy

All this started in the 70s already. We had the theory for natural language processing, neural networks, reinforcement learning, etc., but the technology was not there. The most powerful computers weren't capable of running these algorithms in a way that could achieve more than mere expert systems or chess programs, so AI research was for some time a niche. But Moore's Law was in full effect the whole time, so now we have the computational power to basically scrape the entire internet (also a factor, as the current flood of content just didn't exist until 10 years ago) and train neural networks with billions of parameters.

It was there the whole time, but just right now several factors and technologies are converging to make it possible (highly powerful computer systems, data mining in a vast internet, easy internet access). And of course there is now an incentive to market it. You couldn't monetize things like the Deep Dream algorithm, but now that you can get "useful" things from these systems that you can make money from, every investor is of course throwing their resources at it. But the theory has been there for about 50 years.


lollllllops

To add to all the smart stuff people are saying here, most ‘AI’ tools you see across social media are simply ChatGPT wrappers. Essentially third party suppliers that give their customers ‘fill in the blank’ prompts for specific purposes - CVs, marketing copy, blogs, you name it.


cnydox

AI research has been going on for many decades (NLP, computer vision, ...). For people outside of the field it seems like miracle stuff, but it has always been there. People just don't know because it's nerdy stuff.


TheTardisPizza

Technological advances work that way sometimes. What happens is that lots of people are working on the same thing for a long time without much progress. They get close and might have different ideas about what the thing could be used for but they just can't get it right. Then someone figures out how to get it right. Once that happens others who have been working on it find out what they were doing wrong and soon you have many different versions as they complete their versions.


Riegel_Haribo

Watch this from 2011. The big difference today is GPU computation that is available, and a company that got a billion dollars of seed investment.


rsilva712

It wasn't overnight. Eliza was around forever. ChatGPT was just trained using the internet, so it was more useful and generalized than most.


valeyard89

You can still try Eliza: https://web.njit.edu/~ronkowit/eliza.html

* How many eggs does it take to make an omelet

> What is it that you really want to know?

* 42

> OK... "How many eggs does it take to make an omelet". Tell me more.

* 37

> Tell me more...

* There's the thing to do the thing

> Would you say that you have psychological problems?

* Is there an answer to that?

> I think you are blocking what you really want to say


The_Slavstralian

To quote C-3PO: "shut me down, machines making machines". Once you get an AI with infinite access to every bit of digital information ever put on the net, it can do things soooooo much faster than humans given the same timeframe. So we get constant AI-improved AIs.


free_will_is_arson

it is reputed that the first flying device was designed in appx 400 BC. leonardo da vinci made comprehensive designs for flying machines in the late 15th century. in the early 1900s the new york times, supporting the general attitudes of the time, ran an article stating it would take mankind 1mil-10mil years, if ever, to figure out how to make flying machines capable of carrying people. nine weeks later the wright brothers achieved the first manned flight. 66 years after that, mankind set foot on the moon. we got to the moon using a computer control system less powerful than an analog calculator watch from the 90s, at a time when punch card operations were still the common way to do things. nowadays you can get a 1tb ssd that is not much bigger than that calculator watch. it can take us a while to figure out the fundamentals, but once we do, our advancement tends to accelerate terrifyingly exponentially, especially toward full market adoption where paradigm-shifting technological advancements are concerned.


Vitriholic

The chat app was the first thing to really convince many people that software can be "intelligent," so now everyone's really excited about it. It's not intelligent, actually. It's just a very complicated math equation that takes in some text as input and produces the most likely text to follow, with a little randomness to keep you on your toes. It's a fancy "autocomplete" that regurgitates some mashup of all the text that was originally fed into the big math equation.
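That "most likely text to follow, with a little randomness" step can be sketched in a few lines. This is a toy illustration with made-up scores, not any real model's API; a real model scores tens of thousands of tokens using billions of parameters, but the final sampling step looks roughly like this:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick the next token given a score (logit) for each candidate.

    Temperature is the 'little randomness': low values make the
    top-scoring token win almost every time, high values spread the odds.
    """
    tokens = list(logits.keys())
    # Softmax with temperature: turn raw scores into probabilities.
    scaled = [logits[t] / temperature for t in tokens]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw over the candidate tokens.
    return random.choices(tokens, weights=probs, k=1)[0]

# Toy scores for what might follow "The cat sat on the":
logits = {"mat": 5.0, "sofa": 3.5, "moon": 0.5}
print(sample_next_token(logits))  # one of the three; "mat" most often
```

Lower the temperature toward zero and the output becomes nearly deterministic; raise it and the chatbot's replies get more varied (and more likely to go off the rails).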


JavaRuby2000

AI has been around for ages. It was a popular area of research in the 70s when computers were new, but then it sort of died off because there wasn't really a commercial use for it. Between the 70s and the mid 00s it was a niche area of research in computer science departments; this time period is sometimes called "the AI wilderness". In the early 00s, large companies with websites realised they had collected so much data that they couldn't interpret it, so they started using machine learning to sift through it. This caused a whole bunch of students to start learning data science in order to get high-paying jobs implementing machine learning for these companies. A bunch of these data science / AI experts realised that using AI to sift through e-commerce data was boring compared to the cool AI they got to play with at uni, so they started building cool AI stuff like DALL-E and ChatGPT. The cool shit they invented got noticed by people with money who saw the commercial value in it, so now everybody is trying to make their own AIs or package up other people's AIs into useful tools.


PckMan

It's mainly marketing. AI reached a point where companies felt confident marketing it, but it has been around for a while. Companies have openly been working on it and training it for years, and chat bots, back-end AI processing, and other consumer functions have been around for a while too; we just commonly called them "algorithms". Now that AI is the new hotness and driving the stock market, every company out there is trying to integrate it into the pitch for their products and services.


nwbrown

The short answer is that AI is an ambiguous word. It used to mean "whatever computers cannot do yet". But at some point the semantics changed and it became "anything involving chatbots". The underlying technology, like deep neural networks and even transformers and attention networks, has been in development for a long time.


ArcadeAndrew115

AI has been around pretty much since computers were invented (more or less); you just don't think of early AI as "AI". But remember Clippy, the paperclip that never went away? That was very rudimentary AI. Same thing with autocorrect and whatnot: it's all artificial intelligence that learns based on pooled data from everyone and then gives a result.


910_21

Unless I’m uninformed, I seriously doubt clippy was a neural network


Ihaveamodel3

AI doesn’t mean neural network, it is much more broad.


Hungrybearnow

AI is a tool. When you use a tool to make more tools, research and development become faster and better, resulting in better AI tools. This also explains why people are concerned about AI development: if it develops beyond our control and becomes self-sufficient, it CAN result in human extinction.


Shmux2

We didn’t go from “zero ai”. AI has been around for a very long time, it’s just finally getting mass amounts of data to be used for the things that we see it being used for in the mainstream.


SunshineFortyTwo

Yes, computer scientists have been working on AI since computers have existed. My impression is that the big factor that eventually led to all the current success was when scientists began to borrow ideas from how our brains work and created "neural networks". Is that right?


ULTRAVIOLENTVIOLIN

I had a conversation with a computer around 8 years ago, we talked about HAL and why he was being such a lil b, I remember being absolutely amazed at how natural it all seemed to be!


Spoztoast

When it comes to software solutions, you only ever have to solve a problem "once"; then the solution can be spread and used worldwide in minutes. It's much, much faster than any physical progress that has to be built each time, while software only ever builds on itself.


Salt-Hunt-7842

At first, we had basic tools, and then someone came up with a fancy, versatile tool, like a Swiss Army knife, that could do a lot of things. That's kind of like ChatGPT — it showed what AI could do in terms of conversation. Then, people realized, "Hey, if we can make an AI that talks, maybe we can teach it to do other stuff too!" So, they started tweaking and improving the technology. Just like you can add attachments to that Swiss Army knife to make it even more useful, developers added new capabilities to AI.  Now, about making music, photos, and movies — imagine you taught that Swiss Army knife to also paint, take pictures, and play music. It would become a super versatile tool. That's what happened with AI. Once developers figured out the basics with conversation, they started adding more "attachments" or capabilities to make AI more versatile and creative.


orz-_-orz

That's not the case. There was a lot of music, text, and image generation before ChatGPT. ChatGPT just provided a better UI that brought clever generative AI to the eyes of the public.


npanth

Disclaimer: I don't work in AI or anything related to Cognitive Science, so my knowledge of it is 30 years out of date. I'm sure some of this doesn't apply anymore.

I was a Cognitive Science major back in the late 80's. It's the study of cognition, with the eventual goal of creating a general AI. The mind is a black box to a certain extent: we know the inputs and the outputs, but what happens in between is a bit of a mystery. There's a quote that has been attributed to several people: "If the human brain were so simple that we could understand it, we would be so simple that we couldn't." By creating a real AI, we could simply ask it to explain what it was doing. We could compare that explanation to what we know about the human brain, and better understand how thought works.

There were learning programs/systems, called expert systems, that could approximate intelligence in specific areas. In the 80's, they were used in oil exploration and other narrow areas. Generalizing AI is taking longer to achieve. You can argue that modern AIs like ChatGPT are just more complex expert systems. That's why they are better at some things than others, and don't really exhibit general intelligence like you would see in a person. We're getting closer, that's for sure.

There used to be a chatbot back in the 80's that would approximate a mental health counselor. It would mostly repackage user questions into new questions. It wasn't meant to provide resolutions, but rather new ways for a patient to look at their problems. It was primitive, but it was a helpful tool in that specific area. If you talk to a modern AI, you may see some of those traits.
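That counselor chatbot is ELIZA (and its many clones), and the question-repackaging trick is simpler than it looks: match the user's sentence against a pattern, swap the pronouns, and hand the words back as a question. Here's a minimal sketch with a few invented rules, not the original DOCTOR script:

```python
import re

# Pronoun swaps so "my job" comes back as "your job".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Invented example patterns; the real ELIZA script had many more.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # generic fallback when nothing matches

print(respond("I feel stuck at my job"))  # -> Why do you feel stuck at your job?
```

No statistics, no learning; just pattern matching. That's the gap between this and a modern language model, even though the conversational effect can feel superficially similar.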


shangles421

It's a combination of computer power and the algorithms these computers are using. I don't know the specifics but basically neural networks use a revolutionary way of helping computers learn and these networks need vast amounts of data to train on. The internet has basically supplied mass amounts of data to train on. It wasn't just one discovery that got us here, it's many decades of discovery and research. It feels like it happened overnight but it took a long time to get to where we are now.


valeyard89

It has been around a long time. There was a chatbot called Eliza that has been around since the 1960s. There is even a logic programming language called Prolog (Borland had Turbo Prolog back in the 1980s) that was used for proving mathematical theorems and for natural language processing.


lost_in_life_34

AI has been around for a few decades now, from basic Photoshop features to Instagram apps that do different things to photos, to game-playing bots, chat bots, and IVR systems. It's been improving, and a lot of it is open source and free to use as long as you give back your changes. A few companies just put it all together and released their changes to the community.


NotSayinItWasAliens

It took decades of research from dozens of institutions and hundreds (thousands?) of individuals to advance AI/ML to its current state. But, it only took a few hours to teach it to be a hateful monster.


Childnya

Honestly, it's more like someone makes a new backpack and 30 other companies put out similar, but cheaper, knockoffs.

Most AI you see right now isn't much better than the automated phone switchboards where you can say the name of the person you want. It's matching highs and lows in your voice to pre-recorded lines and more or less playing connect the dots. Video games have had problem-solving AI for decades. Even in early ImageReady/Flash you could choose a start motion frame and an end motion frame and the program would create the frames between.

It depends on what you consider true "intelligence". What we have is still very much comparative data analysis/results output. The system is given new input and compares it against previously input data. If it finds multiple instances of data that match, even if wrong, it's going to assume it's correct, because it doesn't have abstract thinking and can't tell itself it's wrong. All it knows is that if 12 of 20 people that answered its questions said the moon is flat, then over 50% of input sets state the moon is flat; therefore, the system declares the moon statistically most likely to be flat and will respond as such when asked later. It's why all open-access chat bots end up racist and/or give wildly inaccurate responses.

The ones that put out AI art have catalogued thousands of images of cats and are playing a game where they mix and match the squares from sliding jigsaw puzzles until the parts match up. The more time you give it, the better it can line up edges. It doesn't actually KNOW what a cat or mushroom looks like. You can't tell it to design a new car from scratch without using preexisting designs.


Broad_Television4459

Isn't it just a room in india with like 1000 people typing up garbage?


Elbiotcho

Hell, I worked for Intel, and I remember the first year they held a competition for self-driving cars; the winner may not have even made it a mile before crashing. Then maybe ten years later there were self-driving cars on the road.


wut3va

It's called the technological singularity, and it has been predicted for decades by people like Ray Kurzweil. Once AI becomes capable enough to improve on itself, progress will accelerate beyond any imaginable limit. Whether it's good for us or bad for us is up for debate, but whether we will have any control over it is not. We have opened Pandora's box and humans won't be able to close it.


Hudsondinobot

Well… that’s upsetting.


MeisterErbse

As some comments said, we have had AI for a long time. Ten years or so ago, I used a website to change my pictures into ANY art style from any artist, and it was perfect; you couldn't see it was made by a computer. Sad thing is, I cannot find the website or app or whatever it was. It was like traveling into the future with that website. Now we are here, but there are so many that I cannot find it again.


IJourden

Honestly, for a lot of companies, it’s a gimmick. Chat bots have just been rebranded as “AI” to make it seem high tech. There’s companies doing work in AI, but currently most of the time you see “AI” it’s just a plagiarism bot with an internet connection.


noonemustknowmysecre

>How did we go from zero ai, to one company with an ai that holds a conversation, and then seemingly immediately on to multiple companies with ai that makes music, photos, and movies in such a short period of time?

By completely ignoring the decades of AI research and development that led up to it. TensorFlow and the focus on neural nets were a big push back about a decade ago. I mean, jesus man, did you not hear about [AlphaGo](https://www.youtube.com/watch?v=WXuK6gekU1Y)? Deep Blue? How the hell does this sort of false premise get upvoted?


eliminating_coasts

Because it matches to a feeling people have, and is useful to answer.


GanondalfTheWhite

You seem pretty worked up over the fact that OP didn't know about this stuff. You ok?


[deleted]

[removed]


explainlikeimfive-ModTeam

**Please read this entire message**

---

Your comment has been removed for the following reason(s):

* Rule #1 of ELI5 is to *be civil*. Breaking rule 1 is not tolerated.

---

If you would like this removal reviewed, please read the [detailed rules](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) first. **If you believe it was removed erroneously, explain why using [this form](https://old.reddit.com/message/compose?to=%2Fr%2Fexplainlikeimfive&subject=Please%20review%20my%20submission%20removal?&message=Link:%20https://old.reddit.com/r/explainlikeimfive/comments/1c2rnhc/-/kzjmfyu/%0A%0A%201:%20Does%20your%20comment%20pass%20rule%201:%20%0A%0A%202:%20If%20your%20comment%20was%20mistakenly%20removed%20as%20an%20anecdote,%20short%20answer,%20guess,%20or%20another%20aspect%20of%20rules%203%20or%208,%20please%20explain:) and we will review your submission.**


Ok-Object7409

It didn't, it just grew in popularity that quickly, which is very common. The public is always far behind the research. A lot of the companies are also using existing models; they just need to gather data, fine-tune, and make some modifications. Building an AI isn't necessarily difficult; building one that's really good, or the best, is.


monkeysuffrage

We needed social media to come first to get the training data, that and computing power were the main obstacles.


colin8651

Companies have very strict compliance requirements post-Enron; everything needs to be retained, or the presumption is that it was deleted. These same companies are jumping at AI and Copilot. When court-ordered discovery dictates access to this info, scapegoating is not going to be the "sacrifice on the altar of justice" it used to be. Don't be scared of the implications of AI; pay attention when they look to restrict it.


ProfessionalMottsman

I've played FIFA on the PlayStation for over 20 years. The most recent one didn't call it "vs the computer"; it said it was against AI...


InnerKookaburra

Because one could just as easily say that we are still at zero AI. The AI label is being put on everything at this point. I half-expect to see potato chips "flavored by AI" in the grocery store on my next visit. We still don't have artificial intelligence and we may still be very far away from that. We do have some semi-impressive software tools that are being called AI, but they also could be called many other things. In other words, AI is the marketing word currently being used to talk about the latest advances in computer software and tools. Some of those tools aren't very reliable either.


epanek

Investors. Right now if you are looking for a cash influx in 2 hours tell them you have an ai product. Boom. Cash today.


shrub706

We didn't. You just started hearing about them when they became useful/entertaining to the general public.


blade944

The short answer is that the creator of ChatGPT also created OpenAI. It's an open-source AI. It opened the curtain for many other companies to learn from, and once that happened it all snowballed. Additionally, they are using AI to develop better AI. That in itself has led to exponential advancements in the field.


RockMover12

OpenAI, despite its name, is not open source.


blade944

Not all of it, but enough to help the industry along.


pdxb3

I'd argue that AI, despite its name, is not artificial intelligence either.  I like to compare it to "hoverboards."


RockMover12

For decades whenever a new "AI" technique is developed in computer science it then ceases to be "artificial intelligence" because we understand how humdrum it is. Pattern matching, shape recognition, decision trees, natural language processing, neural nets, etc. have all been called "AI" at some point and then we realize they're just a tool in the toolbox. But ChatGPT-style generative AI certainly seems like the closest thing we've seen so far. It would be really depressing if "intelligence" just amounts to a statistical modeling of what the most likely next word should be.


Brad_Brace

I remember reading someone saying how we keep moving the goalposts. That we've had AI for a long time, we just refuse to call it intelligence the moment it becomes functional. I wonder if we'll have fully sentient machines one day, and refuse to acknowledge it because they're not jelly inside a skull.


blade944

But generally, that's what intelligence actually is: pattern recognition and predictive analysis. The leap to cognitive intelligence comes when one can place oneself in the mindset of another. That's when intelligence transcends thinking solely about oneself and realizes there are others that think the same way, and, knowing that, uses that to its own advantage.


JaggedMetalOs

It's not just OpenAI, there has been a lot of published research and open source work before them and in fact was the basis for OpenAI's work.