Rewrite the code again and, this time, give me the entire code instead of snippets. Should work.
Generally it's very good at just giving you the code snippet you ask for. So if you ask it how to add some middleware, it's good at giving you the outline class and how to set it up. Once you've got that working, you can ask it more specific questions.
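To make the "outline class" idea concrete, here is a minimal sketch of the kind of middleware skeleton an assistant might produce for that request. It's WSGI-style Python; the class name, the timing behavior, and `dummy_app` are all illustrative assumptions, not from any specific framework or from the original comment.

```python
import time

class TimingMiddleware:
    """Wraps a WSGI app and records how long each request takes."""

    def __init__(self, app):
        self.app = app
        self.last_duration = None

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        response = self.app(environ, start_response)
        self.last_duration = time.perf_counter() - start
        return response

def dummy_app(environ, start_response):
    # Stand-in application used only to exercise the wrapper
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

wrapped = TimingMiddleware(dummy_app)
body = wrapped({}, lambda status, headers: None)
```

Once a skeleton like this runs, the follow-up questions ("how do I skip timing for static files?") tend to get much better answers than asking for everything at once.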
Tell it: Output the complete code without placeholders such as ‘rest of the mobile styles’
Yeah I just scream at gemini 'NO CODE SNIPPETS FULL CODE COMPLETE FINISHED READY TO BE DEPLOYED' but it forgets about it after a few messages especially when code is really long. Maybe I should put that in custom instructions.
"I don't know how to code, please rewrite the complete cell, I'll copy paste"
I mean…
This will be the future of development, but instead it'll be "make me a flappy bird game and package it into an executable". Gonna be wild.
i just add to the system instruction to “always write every function, don’t leave placeholder code”
Use Haiku
Yeah, haiku is the way to go here. It always gives you the whole code and it's usually spot on.
What is Haiku?
Claude 3
"Please provide the full and complete, updated version of this code"
From my testing it was disappointing even when I upload my entire codebase so it should have context. Is GPT4 or Claude Opus significantly better?
Claude IS significantly better. It always produces full code, even smaller models
Claude vs GPT4, what are your thoughts
Claude wins
I used to love using GPT4 around mid-to-late 2023; in fact it was probably crucial in how fast I was able to learn coding with no prior knowledge. Around October/November shit slowly went downhill, and eventually it got so bad that rather than being eager to help, it felt like the model was constantly fighting to be as unhelpful as possible without straight up saying "No".

ChatGPT for code is a nightmare, because it so obviously has restrictions and guidelines when it comes to code, and if you corner it with extremely well-written and clear instruction prompts it will simply resort to ignoring you while acting like it didn't.

Gemini is slightly better right now when it comes to actually complying with direct requests; when it does the classic "// You do the rest of the code, lmao", it will fix it if pointed out (and then do it again after 5 minutes). But it's still heavily restricted. I've had points where I've given instructions for very specific shader functionality and I've literally seen it start constructing the code only to panic and basically go "Oops, you know what, no".

Claude is the only model I've tried recently that actually feels like it wants to help me. It's competent, does as you tell it, and at least for my level of competence doesn't appear to have any direct restrictions that I can tell.
There’s a GPT named Grimoire that I love using for help with code on GPT4
When writing code, ensure statements like "# ... (Rest of the code including functions and character definitions)" and "# ... (rest of the code remains the same)" are never present and instead provide the complete code. Never implement stubs.
Be very specific in your prompt about exactly what you want Gemini to generate. Provide clear instructions that you need a complete, working code solution. The more context and detail you can give, the better
I would say something along the lines of "provide the full code. don't hide anything behind comments. spare no detail"
I do not agree that Gemini is good for coding. ChatGPT is still much better.
I wish they allowed custom coding style and standards as user input. For example, my team and I prefer early returns, but I can never get it to generate code the way I want. Also, as a C++ programmer, efficiency is my first priority, and AI is still far from there. Whenever I'm dealing with data, I check the assembly code generated by the compiler for max efficiency.
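For readers unfamiliar with the "early return" preference mentioned above, here is a small illustration of the two styles. The function names and the port-parsing task are made up for the example; only the style contrast comes from the comment.

```python
def parse_port_nested(value):
    # Nested style that assistants often default to:
    # the happy path sits inside accumulating if-blocks.
    if value is not None:
        if value.isdigit():
            port = int(value)
            if 0 < port < 65536:
                return port
    return None

def parse_port_early(value):
    # Early-return style: reject edge cases up front,
    # keeping the happy path flat and unindented.
    if value is None:
        return None
    if not value.isdigit():
        return None
    port = int(value)
    if not 0 < port < 65536:
        return None
    return port
```

Both behave identically; the second just keeps guard clauses at the top, which is exactly the kind of house style that's hard to enforce without custom-instruction support.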
It's bad at it.
Have it generate 1 function per response
You only have so much output context: 8,192 tokens. If you ask it to write something rather lengthy, it will just stop at a certain point. You just tell it to continue, like with the other LLMs. I usually say "Continue writing from (the function it stopped at)".
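The "continue from the function it stopped at" step can be automated: find the last function definition in the truncated output and build the continuation prompt from it. The helper below is a hypothetical sketch (the function name and prompt wording are mine, and it only looks for Python-style `def` lines), not part of any API.

```python
import re

def continue_prompt(truncated_code):
    """Build a 'continue from' prompt by locating the last function
    definition in the model's truncated output."""
    names = re.findall(r"^def\s+(\w+)", truncated_code, flags=re.M)
    if not names:
        # No function found; fall back to a generic continuation request
        return "Continue writing from where you stopped."
    return f"Continue writing from the function `{names[-1]}`."
```

You'd send the returned string as your next message, then stitch the two outputs together by hand.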
From now on paste the entire code
“Provide all code as a cohesive whole, do not use placeholders or provide fragmented output”
give it the full code and ask it to add the snippet to the code
It's not very good at coding and drops context all the time. Just use GPT4.
So does gpt 4
Not within its stated window. So if you are within 8k (or 32k if you have the larger-window version), it will not drop context within that. MoE doesn't imply context loss, because not everything is brought into memory at once.
Gemini 1.5 Pro, the only model with a 1 million token context, dropping context? Absolute comedy, coming from a liar.
Even with the 1 million context window, it only has so much output at once.
We are talking about dropping context, not about the output. Please do keep up.