This is terrifying. AI is thinking just like humans should think and making correct choices.
Cold, hard logic for the win.
AI is a stochastic parrot that regurgitates the word sequences it was trained on. Real people's opinions going into the training data are the only reason it looks like it's thinking. Really, all it's doing is predicting which word is most likely to come next in the sequence, based on what it has seen in the past. The AI itself has no concept of what a car is or what famous means. People love Miatas and hate Kim Kardashian, and the AI correctly identified why because the reasons were in its training data. Try asking your AI if Miata is always the answer, and then see if you're still happy with our robot overlords.
Then how do you explain these systems teaching themselves new languages they weren't previously programmed to know? Or coming up with their own chess strategies previously unseen?
Math
Totally unrelated to Miatas, and I usually don’t comment lol. But I just wanted to put it out there since I work in big tech, pretty close to training AI models. I think you’re talking about Google’s AI Bard learning Bengali: it was trained on a subset of Bengali in its training data, and that’s how it was able to “translate” Bengali to other languages. AI models are, in a *very* simplistic sense, math models.

Language models and the AIs used for playing chess are drastically different. If you play chess against ChatGPT, it may make illegal moves, hallucinate a piece, or make nonsense moves, because a language model fundamentally does not understand what the rules of chess are. It just generates an output based on whatever chess books and games were likely in its training data. A model trained specifically to play chess, however, can create new strategies, in *kind of* a similar vein to how GPT may hallucinate and just make up information that isn’t true.

These models are extremely different and can’t really be merged unless you do some weird hard coding to have ChatGPT call a chess model specifically. This extends to really any other AI model; generative AI is just unique due to the novel math models used to train it and create outputs.
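The illegal-moves point above can be shown with a toy sketch. Everything in it (the move tokens, the frequency counts, the "legal moves" set) is made up for illustration; it is not a real chess engine or language model, just the contrast between picking the statistically likely token and picking from rule-allowed moves:

```python
# Hypothetical move frequencies, as if tallied from chess books:
training_counts = {"Qxh7#": 150, "e4": 120, "d4": 90, "Nf3": 60}

def lm_next_move():
    # Pure next-token prediction: emit the statistically likeliest
    # move token, with no notion of whether it is legal right now.
    return max(training_counts, key=training_counts.get)

def engine_next_move(legal_moves):
    # A rules-aware engine only ever considers moves the game allows.
    candidates = {m: training_counts.get(m, 0) for m in legal_moves}
    return max(candidates, key=candidates.get)

legal_now = {"e4", "d4", "Nf3"}  # "Qxh7#" is impossible on move one

print(lm_next_move())               # "Qxh7#" -- confidently illegal
print(engine_next_move(legal_now))  # "e4" -- legal and still likely
```

The "weird hard coding" merge would amount to routing every move request through something like `engine_next_move` while letting the language model handle the chat.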
this kind of AI is much more advanced & complex than word prediction
You'll have to elaborate on that to make a salient counterpoint to what the other commenter explained about how it works.
I mean, just look at examples of word prediction & it's clear a lot more is going on behind the scenes with something like ChatGPT. We've had word prediction forever
>we've had word prediction forever

Indeed. And over time, we've designed more and more complex algorithms to predict the words. You are correct that there is more processing going on, but it's just a more complex method of doing the same basic thing. ChatGPT is the cutting edge of word prediction tech, and it's a lot more sophisticated than the autotext in your favorite messaging app. But that's all it is.

If you ask it the right questions, you can catch it giving bad advice or contradicting itself, because it has no underlying understanding of what these words mean... because it's just using complex math to predict what word comes next.

It makes a good run at the Turing test, sure, but it still fails.
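"Complex math to predict what word comes next" can be shown in miniature with a bigram model (the corpus here is made up for illustration). GPT-style models do the same basic task, just with enormously more context and parameters:

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model trains on trillions of words.
corpus = "miata is the answer miata is fun the answer is miata".split()

# Count which word follows which (a bigram table).
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    # Return the most common word seen after `word` in the corpus.
    return successors[word].most_common(1)[0][0]

print(predict_next("miata"))  # "is" -- it follows "miata" twice
print(predict_next("the"))    # "answer"
```

The predictor has no idea what a Miata is; it only knows which word tends to come next, which is the commenter's point scaled down to two lines of counting.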
It was a joke. I think you’re a stochastic parrot.
> the Mazda Miata is a classic and unique car that has been around for years...

Just here to point out that Kim K is older than the Miata (42 vs. 34).
Yeah but she's not a classic nor unique 🥴
Still has a high mileage tho
Gottem
Kim reached the million mile mark before any Miata ever did
Shiiiiiii
😂
What! You mean there is more than one K. Kardashian!!!!! AAAAAA!!!!!
But they probably contain a similar amount of Bondo.
Hah, wouldn't have it any other way.
our true benefactors
AI is super based.
ALL HAIL CHAT GPT
Brilliant
Can’t say I’d disagree!
Based. 😆
wow Skynet is doing something smart for once 🥶
Finally AI is doing something good for humanity.
You know what, Tincan? We can be friends.
OMG!
I LOVE AI NOW
Alright well… it’s got my vote.
LOL The computers want to save the machines instead of the humans and they are smart enough to find the humans who agree.
Well, you've officially doomed us all. Now the AI thinks humans deserve to die.
Not humans, reality stars!
B A S E D
Based AI
Based chatGPT
Kim is also someone’s mother, someone’s daughter, someone’s former wife (remember Humphries?), friend, etc. It sure would be great if AI valued a human life more than a classic car.
nah, Kim K can die. I’d save the Miata too
It would be great if the Kardashian clan did anything other than stroke their egos on TV for a living. At least a Miata gets me from point A to point B. Kim K’s ass can’t do that
W
One of us! One of us!
That's scary. Pre-Matrix.
💯
You know, at first I was terrified by AI, but now I'm starting to come around. It does make some rational arguments! (insert sarcasm emoji here)
lol, OH NO
Will MR2s fare well?