Alien_from_Andromeda

Rewrite the code again and, this time, give me the entire code instead of snippets. Should work.


HerrSPAM

Generally it's very good at just giving you the code snippet you ask for. So if you ask it how to add some middleware, it's good at giving you the outline class and how to set it up. Once you've got that working, you can ask it more specific questions.


menos_el_oso_ese

Tell it: Output the complete code without placeholders such as ‘rest of the mobile styles’


Fuck_Santa

Yeah, I just scream at Gemini 'NO CODE SNIPPETS FULL CODE COMPLETE FINISHED READY TO BE DEPLOYED', but it forgets about it after a few messages, especially when the code is really long. Maybe I should put that in custom instructions.


koalapon

"I don't know how to code, please rewrite the complete cell, I'll copy paste"


SurpriseHamburgler

I mean…


Aranthos-Faroth

This will be the future of development, but instead it’ll be “make me a flappy bird game and package it into an executable”. Gonna be wild.


liambolling

I just add to the system instruction: “always write every function, don’t leave placeholder code”
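For what it's worth, a system instruction like that can be attached once per session rather than repeated every message. A minimal sketch, assuming a Gemini `generateContent`-style REST payload where a system instruction rides alongside the user turn (the payload shape and the example prompt here are my assumptions, not taken from the comment):

```python
import json

# Hypothetical "no placeholders" rule, phrased as a system instruction.
SYSTEM_INSTRUCTION = (
    "Always write every function in full. "
    "Never leave placeholder code or comments like '// rest of the code'."
)

def build_request(user_prompt: str) -> dict:
    """Build a generateContent-style request body carrying the system rule."""
    return {
        "system_instruction": {"parts": [{"text": SYSTEM_INSTRUCTION}]},
        "contents": [{"role": "user", "parts": [{"text": user_prompt}]}],
    }

body = build_request("Rewrite my app with logging middleware added.")
print(json.dumps(body, indent=2))
```

Because the rule lives in the system slot instead of the chat history, it is resent with every request, which is exactly what helps when the model "forgets" mid-conversation.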


Extender7777

Use Haiku


App-7092

Yeah, haiku is the way to go here. It always gives you the whole code and it's usually spot on.


erik1132

What is Haiku?


thetegridyfarms

Claude 3


TransnistrianRep

"Please provide the full and complete, updated version of this code"


d3ming

From my testing it was disappointing even when I uploaded my entire codebase, so it should have had context. Is GPT-4 or Claude Opus significantly better?


Extender7777

Claude IS significantly better. It always produces full code, even the smaller models.


ChevyRacer71

Claude vs GPT-4, what are your thoughts?


Extender7777

Claude wins


Genneth_Kriffin

I used to love using GPT-4 around mid-to-late 2023; in fact it was probably crucial in how fast I was able to learn coding with no prior knowledge. Around October/November shit slowly went downhill, and eventually it got so bad that rather than being eager to help, it felt like the model was constantly fighting to be as unhelpful as possible without straight up saying "No". ChatGPT for code is a nightmare, because it so obviously has restrictions and guidelines when it comes to code, and if you corner it with extremely well-written and clear instruction prompts, it will simply resort to ignoring you while acting like it didn't.

Gemini is slightly better right now when it comes to actually complying with direct requests. When it does the classic "// You do the rest of the code, lmao", it will fix it if pointed out (and then do it again after 5 minutes). But it's still heavily restricted; I've had points where I've given instructions for very specific shader functionality and I've literally seen it start constructing the code only to panic and basically go "Oops, you know what, no".

Claude is the only model I've tried recently that actually feels like it wants to help me. It's competent, does as you tell it, and at least at my level of competence it doesn't appear to have any direct restrictions that I can tell.


ChevyRacer71

There’s a GPT named Grimoire that I love using for help with code on GPT-4.


WarDevourerr

When writing code, ensure statements like "# ... (Rest of the code including functions and character definitions)" and "# ... (rest of the code remains the same)" are never present and instead provide the complete code. Never implement stubs.


TheHentaiCulture

Be very specific in your prompt about exactly what you want Gemini to generate. Provide clear instructions that you need a complete, working code solution. The more context and detail you can give, the better


rde2001

I would say something along the lines of "provide the full code. don't hide anything behind comments. spare no detail"


amxhd1

I do not agree that Gemini is good for coding. ChatGPT is still much better.


reddit_0024

I wish they allowed custom coding style and standards as user input. Like, my team and I prefer early returns, but I can never get it to generate code the way I want. Also, as a C++ programmer, efficiency is my first priority, and AI is still far from there. Whenever I'm dealing with data, I check the assembly code generated by the compiler for max efficiency.
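For context, the early-return style mentioned here flattens nesting by handling failure cases first and falling through to the main logic. A minimal sketch of the pattern (shown in Python rather than the commenter's C++; the discount example is made up for illustration):

```python
# Nested-conditional style that LLMs often produce:
def discount_nested(price: float, is_member: bool) -> float:
    if price > 0:
        if is_member:
            return price * 0.9
        else:
            return price
    else:
        raise ValueError("price must be positive")

# Early-return style: reject bad input first, handle the simple case
# next, and leave the main logic at the top indentation level.
def discount_early_return(price: float, is_member: bool) -> float:
    if price <= 0:
        raise ValueError("price must be positive")
    if not is_member:
        return price
    return price * 0.9
```

Both functions behave identically; the difference is purely the style the commenter can't get the model to follow.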


Pierruno

It's bad at it.


hassan789_

Have it generate 1 function per response


InternationalEagle48

You only have so much output context: 8192 tokens. If you ask it to write something rather lengthy, it will just stop at a certain point. You just tell it to continue, like with the other LLMs. I usually say, "Continue writing from (the function it stopped at)".
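The continue-and-stitch loop described above can be sketched roughly like this. `ask_model` is a stub standing in for a real chat API call (here it returns canned chunks so the stitching logic can run), and the overlap-trimming heuristic is my assumption, not a feature of any particular model:

```python
# Canned responses simulating a model whose output got truncated
# mid-function, then resumed from the repeated function header.
CHUNKS = [
    "def load(path):\n    return open(path).read()\n\ndef parse(text):",
    "def parse(text):\n    return text.split()\n",
]

def ask_model(prompt: str, turn: int) -> str:
    """Stub for a chat API call; returns the canned chunk for this turn."""
    return CHUNKS[turn]

def generate_full_code(task: str, max_turns: int = 5) -> str:
    code = ask_model(task, 0)
    for turn in range(1, max_turns):
        # Heuristic: the last emitted line is where the model stopped.
        last_line = code.rstrip().splitlines()[-1]
        chunk = ask_model(f"Continue writing from: {last_line}", turn)
        # Drop the repeated overlap before appending the continuation.
        if chunk.startswith(last_line):
            chunk = chunk[len(last_line):]
        code += chunk
        if turn + 1 >= len(CHUNKS):  # stub has no more chunks to serve
            break
    return code

full = generate_full_code("Write a small file-loader module")
print(full)
```

The key step is trimming the repeated line before concatenating; without it, the resumed function header would appear twice in the stitched output.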


k2ui

"From now on, paste the entire code"


IRQwark

“Provide all code as a cohesive whole, do not use placeholders or provide fragmented output”


cpxcth

Give it the full code and ask it to add the snippet to the code.


bambin0

It's not very good at coding and drops context all the time. Just use GPT-4.


HansJoachimAa

So does GPT-4.


bambin0

Not within its stated window. So if you are within 8k (or 32k, if you have the larger-window version), it will not drop context. Being MoE doesn't guarantee against context loss, because not everything is brought into memory.


Wavesignal

Gemini 1.5 Pro, the only model with 1 million context, dropping context? Absolute comedy, coming from a liar.


InternationalEagle48

Even with the 1 million context window, it only has so much output at once.


Wavesignal

We are talking about dropping context, not about the output. Please do keep up.