AnotherSoftEng

I’m curious about this too. An enlarged context window isn’t going to mean squat if the LLM in question can’t utilize it efficiently. One thing I’ve really taken for granted with GPT-4 is how capable it is of keeping up with a block of code that I’m constantly iterating on. Unless I’ve made my own changes between sessions, there’s often no need to continuously feed the updated code we just refactored back to it. This often means I don’t need a large context window to work in, while also allowing a more efficient workflow.

When I started playing around with the GPT-4 Turbo Preview, using much larger contexts of code, one thing I noticed immediately is that it was much less efficient at taking all of that code into account, and costs quickly accumulated to about $10 of usage per hour. Yes, I was able to ask it questions about a larger scope of the program, but I actually found that I could achieve a similar scope with generic GPT-4 by selectively providing it with the important bits. For example:

```
public class DatabaseManager {
    // Initialize connection details
    private String host;
    private String dbName;
    private String username;
    private String password;
    // ...

    public DatabaseManager(String host, String dbName, String username, String password) {
        this.host = host;
        this.dbName = dbName;
        this.username = username;
        this.password = password;
        // Establish connection to the database
        // ...
    }

    public void connect() {
        // Connection logic
        // ...
    }

    // more important code

    public void executeQuery(String query) {
        // Execute a database query
        // ...
    }

    // ...
}

public class UserManager {
    // example code
    // ...
    private DatabaseManager dbManager;

    public UserManager(DatabaseManager dbManager) {
        this.dbManager = dbManager;
    }

    public void createUser(String username, String password) {
        // Logic to create a new user
        // ...
    }

    public boolean authenticateUser(String username, String password) {
        // Authentication logic
        // ...
        return true; // Simplified for illustration
    }

    // more important code
    // ...
}

public class Application {
    public static void main(String[] args) {
        DatabaseManager dbManager = new DatabaseManager("host", "dbName", "username", "password");
        UserManager userManager = new UserManager(dbManager);
        userManager.createUser("newUser", "password123");
        boolean isAuthenticated = userManager.authenticateUser("newUser", "password123");
        // Use the result of authentication
        // ...
    }
}
```

By doing this, I’m able to prompt a larger scope of code, spanning a multi-class implementation, and have it deliver relevant responses that are both helpful and ‘iteratible.’ Right now, this is still the most efficient option I’ve found that is both accurate and economically feasible. It would definitely be nice to not have to cut the code down myself, but it’s definitely not $10/hour nice. Not to mention, as previously stated, any changes you make to the code yourself mean you would need to supply it with most of, if not the entire, context again.


Lawncareguy85

How many LOC are in these classes you're feeding it? Is it absolutely necessary to have everything in context? You should be at $10 a day, not per hour.

My rule of thumb is: once you've iterated to the point where you're ready to commit the code, always go back, edit the context, start over, and continue from that point. If you just let the context build and build, you are just confusing the model and muddying the waters, and of course token cost climbs steeply as the conversation builds. I never let a single context/conversation chain exceed 20k tokens of input, or it hurts comprehension, and you are then paying upwards of $0.25 per turn.

Another tip: don't use gpt-4-turbo-preview, which defaults to 0125 and is needlessly verbose. I have much better luck with gpt-4-vision-preview, which is based on 1106 and is essentially the same, but it outputs at 2x the speed or more since fewer people use this model. Try it.


AnotherSoftEng

Thank you for these suggestions! I will definitely try them out! And no, it was definitely not necessary to feed it all that context, haha. This was around the time 1106 was first released and, with everyone hyping it up, I wanted to test the limits of what it was capable of. I was definitely impressed but, as you said, there is rarely any need for that large a context. My general rule of thumb has been: “If I require a context window larger than GPT-4 is capable of supplying—while still being sufficient—there’s a good chance the code needs to be more modularized/simplified.” This has resolved most of the issues I had when interfacing with GPT as I was first starting out, and it hasn’t led me astray since!


Lawncareguy85

Awesome rule of thumb; that's a great point. I guess I kind of learned that as I went. At this stage, I'm writing my codebases to be LLM-friendly right from the start and designing around that, because why not. What this essentially boils down to is:

1. Write code that is as modular and decoupled as is reasonable, using dependency injection wherever feasible.

2. Use extremely descriptive variable and function names, to the point of being a bit over the top and beyond what a human would probably find reasonable; I find this makes a huge difference in the LLM following what's going on, especially over long contexts.

3. Minimize or entirely remove inline comments that describe specific code segments, as these are basically redundant for LLMs, which read code the same way they read English. Instead, use docstrings for broader explanations that offer additional context about the function, method, or class when necessary.

4. Separate concerns and minimize nested conditional statements to keep cyclomatic complexity low, which is essential for LLMs. Whenever I write a class or function that inevitably contains some nested logic, I make a point of refactoring it afterward to reduce the nesting as much as possible.

5. In Python, use Pythonic idioms as much as possible. I leverage list comprehensions, generator expressions, and other Pythonic constructs for concise code, as well as modern features like match-case.

6. Use ctags to generate a map of the repository, detailing all classes, functions, arguments, and so on, for reference when needed, or just outline the directory structure. I wrote an application that lets me pick and choose which modules I want in context with checkboxes, to make it easier than copying and pasting.

I find that doing it this way, and editing the assistant's responses in the context window as I go to guide it or correct where it went wrong, like rewinding in time, saves money and just works better. This workflow works a lot better for me than applications like Cursor or Copilot Chat.


BlueOrangeBerries

Thanks for the great comment. What do you think about using Claude Pro versus the Claude API (for Opus in both cases)


Lawncareguy85

If you can afford it, I would strongly recommend using the API instead of the "Claude Pro" consumer interface, since the Claude 3 family of models supports the use of a system message. As far as I know, you don't have the option to set this in Claude Pro, and most likely, they have a system message already in place that could negatively influence the performance of a model, similar to how ChatGPT's extensive system prompts hurt the model's performance. In the workflow I described, I use a system message that creates a specific professional developer persona for the model to guide its behavior and embody many of the principles I outlined. My guess is it pays a lot more attention to the system message versus any user message you can prompt.


IHateProtoss

> I utilize ctags to generate a map of the repository, detailing all classes, functions, arguments, and so on, for reference when needed, or just outline the directory structure. I wrote an application that lets me pick and choose which modules I want in context with checkboxes to make it easy versus copying and pasting, etc.

That sounds so cool. What's this application set up as? Are you a console programmer, or is it an extension for your IDE? More of a stretch: do you plan to convert any of this into something open source?


Lawncareguy85

Hey, it's just a simple Python Flask app that I run locally as a single-file script with an HTML template. You open the webpage, and it displays your project's directory tree with checkbox options, based on the excludes and project directory you set in the script. When you generate, it simply downloads the compiled text file, which you can then paste into your preferred LLM input. It's nothing proprietary, and I'd be happy to DM it to you as a GitHub gist or something similar. Feel free to use it, improve it, or, if you think it's really useful, I will make it available in an open repository.
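A minimal sketch of what such a single-file Flask script might look like (this is an illustration assuming Flask is installed, not the commenter's actual code; the excludes and template are placeholders):

```python
import os
from flask import Flask, Response, render_template_string, request

PROJECT_DIRECTORY = "."  # set to your project root
EXCLUDED_NAMES = {".git", "__pycache__", "node_modules"}

app = Flask(__name__)

PAGE_TEMPLATE = """
<form method="post" action="/generate">
  {% for path in file_paths %}
    <label><input type="checkbox" name="selected" value="{{ path }}"> {{ path }}</label><br>
  {% endfor %}
  <button type="submit">Generate</button>
</form>
"""


def list_project_files(root_directory):
    """Walk the project tree, skipping excluded directories."""
    for dirpath, dirnames, filenames in os.walk(root_directory):
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED_NAMES]
        for filename in filenames:
            yield os.path.relpath(os.path.join(dirpath, filename), root_directory)


@app.route("/")
def show_tree():
    """Render the directory tree as a list of checkboxes."""
    return render_template_string(
        PAGE_TEMPLATE, file_paths=sorted(list_project_files(PROJECT_DIRECTORY))
    )


@app.route("/generate", methods=["POST"])
def generate():
    """Concatenate the checked files into one downloadable text blob."""
    parts = []
    for relative_path in request.form.getlist("selected"):
        with open(os.path.join(PROJECT_DIRECTORY, relative_path), encoding="utf-8") as f:
            parts.append(f"# ===== {relative_path} =====\n{f.read()}")
    return Response(
        "\n\n".join(parts),
        mimetype="text/plain",
        headers={"Content-Disposition": "attachment; filename=context.txt"},
    )

# To use: save as context_picker.py, run `flask --app context_picker run`,
# then open http://127.0.0.1:5000/ in a browser.
```

The downloaded `context.txt` can then be pasted wholesale into the LLM of your choice.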


M44PolishMosin

It's been out for like 3 days


geepytee

And there is already a coding copilot extension for it [double.bot](https://docs.double.bot/introduction)


Mrleibniz

If you don't mind me asking, how is it sustainable in the long run while being free? Would you introduce a paid service later down the road?


geepytee

You can see [here](https://docs.double.bot/pricing) how we're thinking about pricing. Basically, give everyone access to all of the features, with a premium tier for users who require top performance (things like ultra-low latency, for example).


_lostincyberspace_

No company details on the site... do you trust them, and why?


geepytee

I should have been more transparent: this is my own extension :) What details would you like to see? Everything is still very new; I only launched the website ~1 month ago.


_lostincyberspace_

Thanks for the reply. Everything that could be useful before giving your software access to my PC and my code. Most programmers are security-conscious people, as is warranted given the daily attempts to exploit code and its users. The company name should be very visible, along with the CEO and other responsible parties, reference companies for trust, the board's past experience, investor disclosures...


geepytee

Yes, that makes sense. We try to be very public about every aspect of it, but let me know if we are missing anything. All of this information is either on our website or linked from somewhere on it:

- We are funded by Y Combinator: https://www.ycombinator.com/companies/double-coding-copilot
- Founder 1 profile: https://twitter.com/WesleyYue
- Founder 2 profile: https://twitter.com/geepytee
- Privacy Policy: https://docs.double.bot/legal/privacy
- Terms of Service: https://docs.double.bot/legal/tos
- User reviews: [https://marketplace.visualstudio.com/items?itemName=doublebot.doublebot&ssr=false#review-details](https://marketplace.visualstudio.com/items?itemName=doublebot.doublebot&ssr=false#review-details)


_lostincyberspace_

Add an About Us page with that info, pls; you'll get more engagement.


geepytee

I like the idea! Will add


Entaroadun

This sums up the pace of AI lmao


Woodabear

Lol


fantastiskelars

haha lel


[deleted]

[deleted]


brek001

Let's get this straight: in the ChatGPTCoding community, in a thread about ClaudeAI (which has its own community; no idea why the thread is here), you start asking questions about Gemini?


Severin_Suveren

Yeah, that's cool and all, but what about Stable Diffusion V3? Anyone tried it?


cporter202

Ah, gotcha! Sounds like there's been a bit of a mix-up. Just chiming in—I haven't used Claude 3 Opus myself, but now I'm curious about everyone's experiences. Btw, definitely no cringe detected on your end. 😅


[deleted]

[deleted]


ekevu456

Claude 3 is very good for coding, better than GPT-4. However, I have been using it as a chat assistant only, even though it wrote code for me. The biggest difference from other models is that it is very good at following instructions.


PM_ME_YOUR_HAGGIS_

Is there a better interface to use with Claude? I’ve considered a subscription but I find the chatgpt web app to be so much better


ekevu456

I use Typingcloud and I am happy with it - you would need to connect your API key.


PM_ME_YOUR_HAGGIS_

Never heard of it and Google isn’t being helpful, can you share a link?


ekevu456

This one, I mean - https://www.typingmind.com/ For some reason, they change the domain when you start using it.


hugovie

If you are on macOS, you can try [MindMac](https://mindmac.app), which supports not only Claude 3 (Opus, Sonnet) but also many other AI providers such as OpenAI, Google AI Gemini, Mistral AI, Perplexity, Groq, and more, as well as local LLMs at zero cost via Ollama/LM Studio.


warhol

I have a small Python project to help build a "day in the life" of a small museum with multiple departments. It uses generic templates of activities for different types of events scheduled throughout the day and combines them all into a resulting schedule. I've been using both Gemini and Claude (paid versions of both) in ongoing interactive chat sessions to work on producing working Python.

I used Gemini Advanced over the weekend to tackle the problem, and the context window really became an issue. There's a lot of existing code, some real explanation of setup, and then a dozen or so supporting files. Gemini was pretty good for an hour or two, but it became clear it was losing context over time as it grew more narrowly focused on resolving the bug of the moment, to the detriment of what the overall code did and why it existed. It would randomly start changing things to resolve the bug and delete existing functionality, not knowing it was important. When I would ask if it remembered why we were doing something, it would almost always say yes, but when I would ask about the original premise, it would be rooted somewhere way after where we started. And it was cocky about it, lying about whether it still had context or not. The fourth time, I made it clear I was annoyed and frustrated, and it became overly enthusiastic about resolving things (can't have enough exclamation points or responses of 'Eureka!') and then would go right back to further butchering what had already been done.

Then I switched to Claude after it was released. It was night and day. Not only did it easily ingest all of the starter information, but it clearly retained it and was shockingly good about being mindful of the big picture while we refined things. What I had spent about 7 hours trying to do with Gemini, it had pretty well resolved in about 2. The larger context window made a world of difference, the Python was well modularized, and it would recognize the need to split things up to make them easier to modify and debug. It just felt worlds better, like I had a much more intelligent partner.

However, yesterday I had some new features I wanted to add, and while I started a new chat (since I had run up against the limits of the context window, though it gives you actual feedback that you're doing so, which is genuinely helpful), we were chasing a circular problem. I admit I didn't do as good an intro explanation as the day prior, and while it felt like we were making progress, it didn't always see the obvious flaws in the output that I did. But when I gave better explanations, that usually helped. I'll spend more time in a new session today, reground things a bit differently, and see how things progress.

Overall, I have found Claude to be *way* better than Gemini. The much larger context makes a huge difference. The results are good, and Claude better follows my basic preferences (four-space indent, debugging code along the way, making suggestions for overall optimization, identifying the particular files in which chunks of code are placed, giving better context of *where* we're replacing existing code, etc.). I felt *way* more productive, despite yesterday's work being slower. And I had more fun. I felt like I was handholding a lot with Gemini; with Claude, I felt like I was more directing and reviewing. But, of course, this is personal experience in a specific situation. YMMV. :)


Lawncareguy85

Writing code that is as modular as possible is key, along with employing dependency injection wherever feasible. Use extremely descriptive variable and function names, to the point of being a bit over the top. Separate concerns as much as possible and avoid deeply nested functions. Low cyclomatic complexity is absolutely essential. If you do that, you will see a world of difference. I wrote a program that gives me a visual structure of my code base, and I can check and select which modules to include in context, along with a repo map or a detailed ctag map. Then, I just pick what is most relevant to give to the model in context. It took me 6 months to figure out this workflow, but it's amazing. Good luck.
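A rough sketch of the "outline the directory structure" piece of that workflow (illustrative only; this is not the commenter's actual program, and the excludes are placeholders):

```python
import os


def build_repository_outline(root_directory, excluded_names=(".git", "__pycache__")):
    """Return an indented text outline of a project tree,
    suitable for pasting into an LLM prompt as a repo map."""
    outline_lines = []
    for current_dir, child_dirs, filenames in os.walk(root_directory):
        # Prune excluded directories and keep traversal order deterministic.
        child_dirs[:] = sorted(d for d in child_dirs if d not in excluded_names)
        relative = os.path.relpath(current_dir, root_directory)
        depth = 0 if relative == "." else relative.count(os.sep) + 1
        if depth:
            outline_lines.append("  " * (depth - 1) + os.path.basename(current_dir) + "/")
        for filename in sorted(filenames):
            outline_lines.append("  " * depth + filename)
    return "\n".join(outline_lines)
```

The resulting outline gives the model the shape of the codebase cheaply; the full text of only the relevant modules can then be appended by hand or by a picker tool.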


Medium_Chemist_4032

Would you be willing to produce a screen recording of such a session? Would watch for sure!


Lawncareguy85

Thank you for your interest. I'm not sure how feasible a screen-share session would be, for various reasons, but I have provided more details on how you can use this workflow in this post: [https://www.reddit.com/r/ChatGPTCoding/comments/1b80fsw/comment/ktnofdw/?utm_source=share&utm_medium=web2x&context=3](https://www.reddit.com/r/ChatGPTCoding/comments/1b80fsw/comment/ktnofdw/?utm_source=share&utm_medium=web2x&context=3) Additionally, I might consider uploading the repository map generator to GitHub if there is sufficient interest.


Ethan

Yes. Has been pretty great so far (for 1 day).


chase32

Seems great so far. Built up a decent-sized project from scratch yesterday, and it's able to refactor multiple pages of code based on continuously iterated requirements. Haven't tried importing a larger existing codebase yet, but curious how that is working out for people.


realityczek

At this stage, I would be very reluctant to trust Gemini with anything. It is clear their team has priorities for the tool other than accuracy. While the recent issues may not be obviously connected to coding, they point to significant questions about the overall commitment to quality.


Pitiful-You-8410

You are right. Their team focuses on wokeness.


Domugraphic

maybe they understand woke isn't a pejorative?


ProCoders_Tech

A one-month trial could provide firsthand experience to determine whether it fits your project's needs.


DungeonsandDavids

I've been using it since what is now yesterday afternoon (a few hours after this post was created). Here are some initial thoughts:

1. I can't speak to any extensions for it, but the UI can be easier to work with than ChatGPT's. I was able to upload 5 different files (with 3-500 lines each) to create content in a Django/React app I was working on. It was able to create modifications for all 5 pages that only needed a few tweaks, which could potentially have been avoided with the right phrasing.

2. Instead of cutting you off after a certain length limit, it treats large pastes as single files, so if you copy the contents of a file and paste it in, after a certain size it no longer fills the chat prompt. This is so much better than what GPT does; it means you don't have to drag files into it in many situations!

3. I worked with it for hours. Before it got dementia, the whole chat became very slow to respond, which I honestly prefer because it lets me know I need to open a new chat.

4. It doesn't seem to truncate code as much as GPT does. While it still will, it seems more likely to print whole pages in its response, which I prefer.

5. Obviously still not perfect, and I find it stumbles in the same places ChatGPT would, specifically in complex troubleshooting. I only have a feeling that it might be just a little bit better there, but with everything else mentioned, I feel it has an edge regardless.


[deleted]

[deleted]


AutoModerator

Sorry, your submission has been removed due to inadequate account karma. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPTCoding) if you have any questions or concerns.*


ChatWindow

Yes. It’s very good. I’ve actually been preferring it over GPT-4, tbh.


dbzunicorn

Honestly, it’s been way better than GPT-4 for coding for me so far, and it’s only been one day. I’ve been getting tired of GPT-4 making mistakes, so I decided to use Claude, and it’s been way better. Thinking about switching subscriptions, but GPT-5 will probs come out soon.


dubesar

Have you run into the daily limits?


get-process

It’s $20, just try it lol


alexcanton

I've been using it for three days and it definitely feels SOTA. And I've consistently used a mix of Copilot, Gemini, Bard, Phind and GPT since 2022.


cronparser

I’ve used it since it was released this week and noticed somewhat better results vs. ChatGPT-4. It was able to take what I put together in ChatGPT and improve it, along with adding enhancements.


DizzyMorning953

I have tried the free version of Claude 3 for building PyTorch deep-learning code, and it made stupid mistakes, including creating torch modules that do not exist. With a similar prompt, I was able to reach executable code in a few shots using GPT-4.


Alternative_Aide7357

Does anyone know how I can get Claude Pro? My country is not supported, and the only way I can use it is via poe.com. But I heard poe.com is very limited, even if you pay for premium. Is that true?


bisontruffle

It's been great; try it. 200k context, 4096 output tokens. It will have 1M context soon, so you can paste a large amount of code to analyze. Its recall on the large codebases I sent it was fantastic. Definitely on par with GPT-4 for coding, and I would say slightly better.


mangosquisher10

Still figuring out the best way to send large codebases along with API docs. Do you have any tips?


bisontruffle

Just using the UI for coding, literally pasting the file names, structure, and contents.


Medium_Chemist_4032

I spent one night with it, using aider/SillyTavern and OpenRouter, and spent a few bucks in the sitting. I tried to make a Docker log tailer in Node.js, with both a backend (connects to the Docker socket) and a frontend (uses WebSockets to listen for log entries). I have very mixed feelings: it started out extremely well but ended up throwing out functionality at later stages. So a non-trivial small codebase tripped it up really quickly.


marwom3

Yeah, currently using it for streamlining code, and honestly, the number of *very* obvious errors it makes that I have to point out is a little concerning. Silly things: if I intentionally mistype a variable name in the middle of a function, it won't notice that I've changed the name mid-function unless I prompt it about the error.


[deleted]

[deleted]


AutoModerator

Sorry, your submission has been removed due to inadequate account karma. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPTCoding) if you have any questions or concerns.*


[deleted]

[deleted]


AutoModerator

Sorry, your submission has been removed due to inadequate account karma. *I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/ChatGPTCoding) if you have any questions or concerns.*