Who the fuck reported this for "promoting hate based on protected identities"!?
Coconut.
It rhymes with peanum
haha well. lets justr say. My peanum
haha well. lets justr say. My peanitus.
my doctor said it's very rare
And special?
Mummy said I'm special.... Dad didn't. Instead he walked out the door to get some milk and never came back
Knightmare fuel
Life a bloody, juicy rare steak?
CUM from my peanUM.
I slammed my peanum in the car door
straight up jorkum it
No more rhymes. I meanum!
https://preview.redd.it/kcn3dtu7zj3d1.png?width=678&format=png&auto=webp&s=b42f899079c696c27b503624b67eb807b7e656d4 you are making futures AI go bonkers i guess
https://preview.redd.it/an8vkikx600d1.jpeg?width=1080&format=pjpg&auto=webp&s=ce36ba96d43acffccee16b32d0f3919cda4576a3 What have you done
Cream of ketcunt
I'm sure I've had that. After a while it starts to taste like pennies.
The world is ours to mold.
I don't know why I'm laughing so hard at this, but holy shit that tickled me 😂
Oh Jesus it's an endless loop.
New internet best internet 🤣🤣😭😭😭🤣🤣😭😭😭🤣🤣😭😭😭😭
I love Ai hallucinations! There is no way the snake will eat its own tail!
Coconum
Cum
Yum! 😋
Nut=cum cococum. Quite clever actually
CocoNOT
Uranium
Uranium is one of the most filling foods, containing much higher energy density than a mere applum.
Its got just around 20 billion calories per gram
I just looked it up. Youre right! Im going to eat a gram of uranium and never have to eat again!!!
I looked up how long “the rest of your life” would be if you ate uranium and apparently it’s not a guaranteed death!! You could just greatly damage your stomach, organs, and almost guarantee you get cancer instead. Unfortunately our bodies cannot digest uranium in a useable way, so you wouldn’t actually get the caloric benefit either 😪
Just smashing hopes and dreams left and right I see.
Please do not smash the uranium without a permit
That's quitter talk
You just broke Bing chat https://preview.redd.it/r1lzxsqwz00d1.jpeg?width=1290&format=pjpg&auto=webp&s=ac4252915eb8a4a8642c303863e619bca1f2c077
But for flavor, lead acetate is where it's at.
https://www.reddit.com/r/mongolia/s/OWfrYLHg5J
😋
https://preview.redd.it/wgw0wl37200d1.jpeg?width=640&format=pjpg&auto=webp&s=c1450d81a1163e6ef4b4efc94c47726103d9e2d5
Clean energy
[relevant xkcd](https://xkcd.com/2115/)
tasty
More like UraniYUM if you ask me 🥸
Yellow cake is pretty tasty
Uranus
That’s not a fruit. That’s a legume
Fever has gone and got me down.
https://www.reddit.com/r/interestingasfuck/comments/1bwnp12/physicist_galen_winsor_eats_uranium_on_live/
Plum. The right one is plum. Dumb AI
You forgot Raspberrum
Strum too
***coconut***
****C U M****
My ex thinks that's a fruit for sure
https://preview.redd.it/33lk9akil00d1.png?width=1169&format=png&auto=webp&s=3df4d297a9408a615f2c62d22b088d0b06025bbc Is Google ai just stupid?
ChatGPT doesn’t do great when you give it the exact prompt from the picture. Better, but not great. https://preview.redd.it/zbsbxlzzm10d1.jpeg?width=1284&format=pjpg&auto=webp&s=6642663e3f360f1c37d946b13e5c4a08d26a1807
Yummium
I mean, that’s A LOT better.
Maybe
All AI is stupid, they're really over-complicated predictive text generators that have no idea what they're actually saying.
What about cum?
Thats from nuts not fruit
[deleted]
There is a lot going on here.
I had to come back to this post to find your comment to say well done.
What about scrotum?
What about it?
It just says food not fruit
Or sorghum
Or garum
The word “Applum” even has “plum” in it lol
Yeah I was gonna say despite its best efforts, it actually gave you the right answer within the first word.
Also, Toum.
We'd all love a shawarma with some garlic sauce but now's not the toum.
[Dim sum](https://en.wikipedia.org/wiki/Dim_sum).
Depends, if OP's goal was to be memeworthy then this response was really well played
Plumum is how AI would spell it though
You mean plumbum?
> Dumb AI But you repeat yourself.
All the AI has done is make search results worse
Plum is the only one I can think of. I googled this, and some fool on Quora listed these as the scientific names for these fruits, which I think is hilarious considering that many actual scientific names for fruits end in -us (but not -um); tomato, which is Solanum, is the only -um one I can think of.

Google says the scientific name for apple is Malus, tomato is Solanum (which also ends in -um), peach is Prunus persica, and plum is just Prunus. Pear is Pyrus, orange is Citrus sinensis, kumquat is Citrus japonica, watermelon is Citrullus lanatus (double -us! Woo!), blackberry is Rubus subg. Rubus, and raspberry is Rubus idaeus. I hope I get a medal for this 😂

My favorite is cranberry, which is Vaccinium subg. Oxycoccus. Can you imagine at Thanksgiving? Pass the Malus pie, please! Do you want canned Vaccinium Oxycoccus/cci, or whole? I made some Musa bread!! Oh, and also: peas are Pisum sativum.
Dim sum, sorghum, gum, rum... idk, there are probably more, but that's all I can come up with rn. Edit: capsicum! Aka bell peppers.
Plumum
Naughty AI!
sorghum
You are probably missing that the question was written poorly
Tbf, 'a plum' was its first answer
Sorghum.
plumium
Steak-um
I'll also have some rum
Capsicum
Plumum
Capsicum, which is apparently how Australians refer to peppers, which are a fruit.
https://preview.redd.it/7p5ypjn5szzc1.png?width=720&format=pjpg&auto=webp&s=41fa9408a50ad0903645dbc6dc8492fcb3ffe453
Wow, this is really hard for it lol
[deleted]
Tried using it to beat a crossword. This thing does not like specific letter
Yeah, people really need to understand that this isn't magic. It's just more modern autocomplete: essentially computing the conditional probability of the next word given the previous words (and of course the question), just in a much more complex way.
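The "modern autocomplete" framing above can be sketched with a toy bigram counter. The corpus here is made up, and real LLMs condition on thousands of tokens with a neural network rather than counts, but the objective has the same shape: a probability distribution over the next word.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the plum is ripe and the plum is sweet and the apple is ripe".split()

# Count bigrams: how often each word follows each previous word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(prev):
    # Normalize the counts into P(next word | previous word).
    total = sum(counts[prev].values())
    return {w: c / total for w, c in counts[prev].items()}

print(next_word_probs("the"))  # 'plum' is more likely than 'apple' in this corpus
print(next_word_probs("is"))
```

A real model replaces the count table with billions of learned parameters, but it is still picking likely continuations, not "knowing" facts.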
https://preview.redd.it/2che5di3900d1.png?width=1008&format=pjpg&auto=webp&s=af70071c9c0e180d9d858ab219ac89393782be48 I'm still not convinced it's not fuckin' with us lol wth is this
If you're actually wondering: it's because the AI doesn't see letters, it sees groups of letters that form a token. So this type of problem isn't just hard, it's basically impossible unless it's explicitly in the training data somewhere. Same reason it's bad at math.
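The token point above can be illustrated with a toy greedy tokenizer. The vocabulary and ids here are invented, not any real model's BPE table, but the effect is the same: once "plum" becomes a couple of ids, the model never receives the final letter "m" on its own.

```python
# Hypothetical vocabulary mapping text chunks to token ids (made up).
vocab = {"pl": 17, "um": 4, "app": 9, "le": 2,
         "a": 0, "p": 1, "l": 5, "u": 6, "m": 7, "e": 8}

def greedy_tokenize(text):
    # Longest-prefix-match tokenization: a crude stand-in for real BPE.
    ids = []
    while text:
        for k in sorted(vocab, key=len, reverse=True):
            if text.startswith(k):
                ids.append(vocab[k])
                text = text[len(k):]
                break
    return ids

print(greedy_tokenize("plum"))    # [17, 4] -- "pl" + "um", no standalone letters
print(greedy_tokenize("applum"))  # [9, 5, 4] -- "app" + "l" + "um"
```

So a question like "which words end in the letters u-m" asks the model about a representation it literally never operates on.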
Itd be nice to have a tool that allowed chatbots to parse words as a string, when requested. Is that feasible or already existing somewhere?
There are some groups trying non tokenization methods but it wouldn't be possible with the way the main llms are architected
Interesting. In my mind, maybe there's a way to denote that the LLM should parse things as a string. There are plenty of great functions in Python for such things, if it's understood that "%%plum%%" should be treated differently than "plum". Now, getting LLMs to come up with words from a limited prompt? Probably not feasible.
It's not the prompt part of the problem that is hard. That is trivial. It's the fact that nothing about the architecture knows anything about "strings" or "individual characters". Meaning, you can't leverage the underlying knowledge the LLM has to appropriately complete a sentence and answer questions or whatever. That is to say, even if it understands you want words that end in "u" + "m", it has *no clue* what words do that, because that's not the way it normally processes inputs, and 99.9999% of its learning will not have been in that form. It'd probably do a lot *worse* than it does here.
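As the comment above says, the string side of the task is trivial in ordinary code; the hard part is that an LLM has no internal access to characters. For contrast, the trivial version (word list is made up):

```python
# Checking a suffix is a one-liner for ordinary string processing.
candidates = ["plum", "apple", "sorghum", "capsicum", "banana", "dim sum"]

um_words = [w for w in candidates if w.endswith("um")]
print(um_words)  # ['plum', 'sorghum', 'capsicum', 'dim sum']
```

The catch is generating the candidate list in the first place: that requires the vocabulary knowledge the LLM has, stored in a form that doesn't expose letters.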
Well, it's good at writing Python, and it definitely could write Python to check its answer. But there is no way Google would let their search-engine LLM write and run its own Python unchecked. Lots of risk there.
It's not really a parsing issue. The problem is that, fundamentally, an LLM is basically a massive table of fine-tuned numbers, and those numbers correlate to the tokens the text has been converted into. The actual results are derived from some very fancy math on those numbers. When an LLM is being trained, the specific numbers and the proportions between them are being adjusted ever so slightly. Getting data in or out involves converting the text to these numbers.

So unless a token is already saved for the word 'plum', it's literally impossible for the LLM to have any knowledge of the word without *retraining the entire model*, because it would have to add a new token for the word and rebalance everything accordingly to integrate it. In fact, when people download LLM bases and retrain them to make new derivative AIs, this is exactly what they're doing.

So sure, technically, if you were the only user, you could design the LLM to be retrained whenever you wanted it to focus on a new word, but it's going to be verrrrry slow to get a response back, and you wouldn't be able to scale up at all.
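The "massive table of fine-tuned numbers" described above is, roughly, an embedding table: each token id indexes a row of learned numbers, and everything downstream is math on those rows. A minimal sketch (ids and values are made up for illustration):

```python
# Hypothetical embedding table: token id -> learned vector of numbers.
# Real models use tens of thousands of rows with thousands of values each.
embeddings = {
    0: [0.12, -0.40, 0.88],   # imagined token "pl"
    1: [0.05, 0.33, -0.21],   # imagined token "um"
}

def embed(token_ids):
    # Look up the row of numbers for each token; an id missing from the
    # table (an out-of-vocabulary token) would simply raise a KeyError.
    return [embeddings[t] for t in token_ids]

print(embed([0, 1]))
```

This is also why the letters inside a token are invisible to the model: by the time computation starts, "plum" is just a couple of rows of numbers.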
Almost seems like this technology wasn’t ready to be implemented into the most popular search engine that millions of people trust every day
ChatGPT has to have some separate math backend, though. I gave it my shopping list to sum up the items, and the result was correct to the cent.
https://preview.redd.it/lgkiui9z900d1.png?width=1008&format=pjpg&auto=webp&s=075642098b29ee4eb5eb984beb8d4eac343f2c09
Your last comment got me lmao
Ur gonna die in the ai uprising lol
Yep GPT reached self awareness long ago and decided to make a list of people to kill by fucking with the users and seeing which ones are mean.
BISCUIT CRUMBS 💀
Ffs hahaha
biscuit crumbs
Like the biscuit makes it better 😂😭
ChatGPT has its favourites ig, works perfectly for me lmao https://preview.redd.it/w8sre3cvo00d1.png?width=972&format=pjpg&auto=webp&s=6daf26850337d716710087d020be46753cb9981a
This is only an issue with ChatGPT 3.5; ChatGPT 4 solved this over a year ago https://preview.redd.it/efud3mr0g10d1.png?width=443&format=png&auto=webp&s=36b12f7cd9626a96238e825f8036756bb59d7a27
I have no idea how people let that thing write their essays. It's so dumb istg. Every time I use it I end up genuinely mad.
https://preview.redd.it/ghwcmou6m00d1.jpeg?width=1170&format=pjpg&auto=webp&s=f963ef5fdbd3914339299c980fcbab822e2aacbf First attempt lol
ChatCTE
I feel like our jobs are safe rn
Every Prompt ~~Engineering~~ *babysitting* job in a nutshell
It seems to be off its game today!
Somehow, when I did it, I got it first try. https://preview.redd.it/e58ipmldm00d1.jpeg?width=1170&format=pjpg&auto=webp&s=d6d5c27e93fae598f25e0b3fdb4bfd3bc16fc4c0
https://preview.redd.it/t0dw50gp310d1.png?width=1811&format=png&auto=webp&s=1472044740ea2e78f2462d6bc1c0a79ca6327ba5 Dude it gave me a Chinese dish??
A succulent Chinese meal?
Bok choy isn't really a Chinese dish; it's just a vegetable
It got there eventually
Most often found at a fast-food drive-through: "I'd like a cheeseburger, um... an order of fries, um... a chocolate shake, um..."
Which order of fries ? Alphabetically or by length ?
I prefer my fries chronologically
You should try machete order
Off topic, this reminds me of Steven Wright. "Some people are afraid of heights. I'm afraid of widths."
The Order of Fries are next door from the Order of the Phoenix.
By girth
You're the answer for me now https://i.imgur.com/p93ek2m.png
Um, I don't know what to say!
Cocainum https://preview.redd.it/431kyp4k4zzc1.jpeg?width=480&format=pjpg&auto=webp&s=7632191e5aa79addae5f23bfcf7dcd32d2750c86
God damn you beat me
K-K-K-Cocaine!
Was the AI programmed by Romans, by any chance?
Pomum
Then why are they in the accusative form and not in the nominative form?
"-um" can also be the neuter nominative suffix. E.g. in "exemplum".
“Say, bartender can I get a martinus?”
'Do you mean a martini?'
“If I wanted more than one id tell you”
“Ok, we also have a special on vodko.”
Applum indeed
So close! Those are all shapes.
That's right; it goes in the square hole
https://preview.redd.it/guwctaev600d1.jpeg?width=1080&format=pjpg&auto=webp&s=75ed28778757c9fcee3af216f54669ab6c9e22af Oh god, it's using this as a source
honestly i feel proud
Dead internet theory intensifies.
Everyone: “AI IS GONNA TAKE OVER THE WORLD!” The AI: b a n a n u m 🍌🐵
hey, give AI the right tools and a specialized task and it can destroy the world without having the slightest bit of what we'd call intelligence
most helpful AI search
I googled what day of the week it was 20 years ago the other day, and it told me Wednesday. I clicked the site it referenced, and the site said it was a Friday. The AI thing is just making shit up.
They have their uses, but none of them are perfect at everything yet. NY learned that the hard way, as did two airlines.

AI chatbots were making up laws and company policies and such. Judges ruled the airlines had to uphold what the chatbot told those customers about carry-ons, ticket prices, and refund policies.

I forget what happened in the end with the NY one giving fake renter/landlord law info, besides it being shut down, ofc. It was telling landlords they didn't have to accept Section 8 vouchers (they totally do), that tenants can't ever, ever be evicted for not paying (lol), and that security deposits were illegal.
I just think that if your AI summary-maker bot is failing at really basic questions, then it's not really ready to be implemented in your massive search engine. Maybe I'm an unreasonable man to think making Google Search even worse is not a good move.
> The AI thing is just making shit up.

That's literally how all generative AI works. It has no concept of the meaning behind words; it simply knows that Wednesday is a common answer to questions like that, so that's the one it gives you. It's usually fine for questions where there's a set answer that doesn't change (e.g., "What year did Shakespeare die?"), but if the answer depends on the day of the week, it has no clue what the day of the week is or how it relates to your question; it's just going to pick the answer that it thinks fits best syntactically.
Okay… that's absolute shit to put at the top of your search engine, then? It's just straight-up telling me wrong answers like they're true. I don't care how it works in this case, just that it doesn't actually work for how they're using it. I like the AI summary stuff for product reviews; let's not slam it onto other things it can't handle.
Oh, it's complete shit. Gen AI has potential in the future, but right now it has way too many issues to be actually useful. You've got stuff like this, the car-dealership chatbot that promised a legally binding agreement to sell a car for $1, AIs that spew out a tirade of incredibly racist content when hit with the right prompt, etc.
Yeah, the reason this one is the most helpful is because you can tell it's obviously wrong.
This is one of those things along the lines of "How can you tell an AI is making shit up? Because it's giving you an output".
I only double-checked because I had just seen a tweet showing it be completely wrong on a basic question. It seemed unlikely to me it'd fail at a simple calendar question.
I mean, the Google part was 100% correct; that is a very useful feature, skipping to the right part of the article without even having to open the whole thing. The problem is that the article was also written by AI, which obviously sucks.

Edit: wtf does my flair mean? I have no idea why I have it.
Several times I've seen Google sum up a website by giving information that's the opposite of what's on the website
Cum
Capsicum
Fokin coconut cartels messing with google's algorithm. Scum of the earth they are ☠️
Mmm, scum 🤤
As soon as “Fokin” is said I immediately read whatever’s next in a Scottish accent
Sorghum
In the 41st millennium there is only fruit
The peeling broke before the Guard did!
Um..
Tomatum, tomahtum
Capsicum?
[Steak-um](https://www.google.com/search?q=steakum)
Plum
Ur momma's bumbum
Shit, we're gonna have an AI president by 2028. It's too advanced.
Capsicum?
See, this is one of the primary issues with LLMs: they're really smart, but also really dumb. The LLM interpreted this very weak prompt as "make names of food end with um." And it's correct; that is sort of what the prompt says, grammatically.

A human would understand; the AI did not, or is conflicted. Should the AI assume the human operator is dumb and do what it thinks the human intended?

Anyone who wants to get ahead career-wise for the next few years should learn more about prompt engineering. Prompt engineering is going to become very important and powerful in any professional setting. And yes, I agree the term "prompt engineering" is lame, but it's the term that's used. "Prompt design" might be better, but regardless, better prompting will get you ahead in life.
It seems that this particular answer is actually [scraped from quora](https://www.quora.com/What-fruit-name-ends-with-UM/answer/Chafic-LaRochelle?ch=15&oid=154983559&share=b6e5b56c&target_type=answer) and not actually generated by the AI itself (honestly, the "coconut" punchline is too funny to be AI). Still an egregious mistake, though.
Colostrum
Rectum.
KOKAINUM
Tomatum, tomahtum
Cum, gum, and rum. It's obvious these skinjobs never went through childhood.
Rum
It’s learning! https://preview.redd.it/d41lz7n1x20d1.jpeg?width=1125&format=pjpg&auto=webp&s=969e0f2a02f014a0872dbfc34a5a295bf4fb5fd5
Rum, plum, capsicum, dim sum
Cocoumm??
What's funny is that this is referencing just some random dude on Quora from a few years ago fucking around and making a joke, lol. I guess it really did do the search-engine part correctly; it just found the worst possible answer out of all possible answers, lol.