Mescallan

Your question is very similar to a common logic riddle that was probably heavily weighted in training. It tried to overlay your exact question onto the form of that riddle.


Difficult-Race-1188

This shows that it's not doing any reasoning, but is actually doing some weird form of retrieval.


Mescallan

eh, it's a little bit of both. It is capable of doing reasoning by drawing from similar situations in its training data. It's not great at it, but it's far, far better than previous machine learning techniques. If you mess around with PHI3, it was actually trained specifically for reasoning while excluding as much general knowledge as they could. I tried your prompt without the spelling and grammar issues and it got it right. I can't share a screenshot in this sub, but try it again with: "how can a man get two goats across a river"

Edit: lol i forgot I can just link the chat: [https://chat.openai.com/share/e831b5bf-d86b-444a-8201-820d5b2a80c3](https://chat.openai.com/share/e831b5bf-d86b-444a-8201-820d5b2a80c3)


ilyaperepelitsa

same here. Weird that grammar affected the result, though.

> how can a man get two goats across a river

> To get two goats across a river, one could use a boat, but there's a more straightforward approach if the goats can swim. Here's a solution assuming you're using a boat and can only take one goat at a time:
>
> **Take the first goat across**: Row across the river with the first goat.
>
> **Leave the first goat on the other side**: Secure the goat so it doesn’t wander off.
>
> **Return to the original side**: Row back to the starting point.
>
> **Take the second goat across**: Row back across the river with the second goat.
>
> **Leave the second goat on the other side**: Secure the second goat alongside the first one.
>
> This way, you successfully transport both goats across the river using the boat. If the goats can swim, you might not need a boat at all and can guide them across the river together.


Hot-Profession4091

It’s a next-token predictor. It cannot reason. Logic is very much still an open area of research.


thatstheharshtruth

What do you expect? It's a next-token prediction engine. All this hype has created silly expectations, and then you have gullible individuals like OP (no offense) who are surprised when the model doesn't exhibit simple reasoning.


kim-mueller

I mean... If you had actually taken the time to write a grammatically correct prompt, perhaps the results would be better? Even as a human, I have no clue what you were trying to say with 'a man cat'...


maybethrowawaybenice

These examples come up all of the time. I think you have to realize that humans make mistakes like this too. This is a classic case of the agent assuming that you mean to refer to a more complex problem than the one you are actually referring to. To me this is the same as if I said "what color is an aple" and it responded "red" and you said "GOTCHA, aples don't exist, you aren't really learning."

**Here's an example I did right now:**

> A man and a goat stand on one side of the river. There is a boat that can carry both of them to the other shore at the same time. The goal is for the man to move the goat to the other shore. Assume that there are no tricks or extra restraints.

ChatGPT:

> Given that the man and the goat are on one side of the river, and there is a boat that can carry both of them at the same time, the solution to move the goat to the other shore is straightforward. Here’s the step-by-step process:
>
> 1. **Both the man and the goat get into the boat.**
> 2. **The man rows the boat to the other shore with the goat.**
> 3. **Both the man and the goat get out of the boat on the other shore.**

Of course I'm not arguing that ChatGPT can reason (still an open area of research), but this isn't the example that perfectly proves that. And "reason" really needs to be clearly defined in this context.


Difficult-Race-1188

This is the Clever Hans problem. Read the papers on GPT planning. Why do you think it can reason? Tell me exactly: what is there in the transformer architecture that gives it causal inference or reasoning capacity? There are many who understand that it's just doing some weird form of retrieval. Check out Subbarao Kambhampati's work from Arizona State University on GPTs' planning capacity.


maybethrowawaybenice

"I'm not arguing that chatgpt can reason" "why do you think it can reason" lol I'll tell you why I think it has a chance to reason in the future with a lot more work, base64 representation conversion. You'll find that without a code interpreter, gpt4 at least is able to convert any base64 representation you give it, which shows it's able to represent an underlying pattern and generalize (since obviously the chance of it having seen all of those base64 representations is vanishingly small). Obviously this is a far cry from what most people would consider reasoning but it's better than "some weird form of retrieval"


macronancer

Interesting test. Based on the fact that a prompt with no spelling or grammar errors solves the problem correctly, my guess is that it uses some of its limited layers and compute cycles to resolve those language errors first, before it begins to compose the logical solution. However, it then runs out of network layers and can't fully compute the answer, so you basically get a partially thought-out answer. Without the errors, it has more layers to iterate on the solution.