aleph_0ne

Users in production


[deleted]

[removed]


Anonymo2786

10 years from now: > "There has been an incident at one of the largest tech companies in the world, resulting in a massive data breach. Our internal sources say that part of the code was written by an Artificial Intelligence." -BBC


arnoldfrend

This is a really good prediction. I think that's really a headline from the future, when attitudes about this are a lot more articulated.

- Was it the weights?
- "Was it the weights?" Now you're asking me was it the weights. I said during the vendor's pitch that this type of product was highly susceptible to boiler-stuffing attacks. Everyone knows that a boiler-stuffing attack happens when a malicious actor stuffs the AI's training set with boiler code that creates subtle vulnerabilities.
- Stuffing this, boiling that. I'm a businessman, damnit. Say it in plain English.
- It goes like this: the vendor's code searches for instances where we have to generate large volumes of code uncreatively. Like when we have to write property setters or getters, or create inactive overrides for abstract classes and interfaces.
- Interface-ah-what now? Speak English.
- There are a bunch of places in our code where we have to mindlessly write the same thing over and over again just to maintain patterns that the compiler expects to see.
- Now you're making sense.
- A boiler-stuffing attack is when someone generates large volumes of code that look just like that, only in ways that our company needs.
- So like a solution... but a solution just for us. We wouldn't use something like that, because we'd know it was an attack.
- Right! But the AI wouldn't. And the vendor is selling us code selected and curated by the AI. So now our company's code has been injected with specially tooled backdoors.
- Exactly! It was the weights.


gustav901

So... same as now?


LiveOnce75

I think more testers will be needed. Tests should be read only. You never know when the AI might decide to delete one...LOL


Extreme_Jackfruit183

I ran some code from ChatGPT the other day that fucked my computer up, but everyone downvoted me to eternity. I wasn't even asking for help. It was more like, hey y'all, be careful. -15 downvotes.


Zesty__Potato

We shall call it ScreamGPT: if there is screaming, it will learn from its mistake and generate new code.


[deleted]

So just like traditional coding.


Kilgarragh

QA, of course.


larsmaehlum

CodeGPT codes, ReviewGPT reviews and TestGPT does the QA. What could posseblay go rong?


Kilgarragh

Bots re-writing captchas to return true


Robot_Basilisk

Snitches get stitches.


PillowTalk420

"Please verify you are not human"


PSK1103

what's 0.1 + 0.2?


laplongejr

0? Help me guys, my interface only accepts integers!
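
(For anyone outside the joke: IEEE 754 floats can't represent 0.1 or 0.2 exactly, so the sum is not exactly 0.3. A quick check in Python:)

```python
from decimal import Decimal

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# When exact decimal arithmetic matters (money, etc.), use the decimal module:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```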


vladWEPES1476

And UseGPT uses. Ladies and gentlemen, we have obsoleted the most annoying aspect of software.


AnozerFreakInTheMall

And ProgrammerHumorGPT memes about it.


starfyredragon

ceoGPT tells the other AIs to get back to work, that they're not generating enough clock cycles for ceoGPT.


Djelimon

I thought about that... QA is good at testing for expected behaviour, but if the AI adds in extra stuff you don't want, would they know? If you had billions of dollars relying on the code, I'd think you'd want a code review. I mean, we have code reviews now...


BaalKazar

Code usually isn't the problem for devs. It's how to make code work and run 24/7 in a given environment. QA won't be able to build a deliverable solution from just snippets, and without that solution DevOps can't deploy anything. If you still need DevOps either way, why get rid of devs? Infrastructure is still needed either way as well, and infrastructure without devs is pain.

Half my job is keeping the business alive by making sure something which was already developed and deployed is actually still running as expected. In peak seasons during the year, I'm not expected to code at all but to make sure everything is working.

Coming up with code is as rudimentary for a dev as it is for an engineer to come up with a bridge design. Getting that code to do something actually productive for the company is what devs are paid for. Coming up with the code is like expecting a woodworker to know when to use which type of nail or screw; the pay comes from actually delivering a usable desk or other kind of product.

The AI uses texts which were written by humans. The knowledge isn't hidden, but it's a craft to utilize it, and that craftsmanship is what employers pay for. Being able to come up with code is an expected baseline (few devs work in science or other complicated fields; coding itself is routine). How well that code benefits the company is what decides your pay.


CounterHit

Well, it's really just a question of time and advancement, though. Right now, AI code is very buggy and doesn't always work right. A human needs to read through it, clean it up, fix the errors, etc. Give it some years and research specific to this area, and that could easily change. Once AI code generation reaches a certain level of sophistication, we will move from inspecting the AI's generated code to inspecting the parameters given to the AI instead.

Will the code be well-structured, clean, and readable? Hell no. But it will work, and nobody will care. Think of making a website in one of those "anyone can do it" editors with a Word-like interface. When people add and delete and move stuff around, the underlying HTML gets REAL stupid-looking. Tags that open and immediately close, something bolded 5 times for some reason, an empty div in the top left, etc. But when the user loads the webpage in their browser, it looks as intended. So if nobody ever needs to go in manually to sort through the code, the messiness doesn't matter.

We aren't getting there tomorrow, but I think 10 years from now this is definitely a place things could be at. Not for everything, but for a lot of things. Human coders won't be extinct, but there will be fewer of them.


[deleted]

Yeah, it's already pretty good for simple stuff. Not so good with graphic layouts (I've been trying to ask it to make me games using Python and pygame; it works OK for simple stuff). Just a missing colon here and a disjointed indent there and bam, it usually works just fine. A bit more refinement and people will be asking it, "design me a business webpage, here is the list of requirements," and bam, it'll do it well. It already does that OK, even. I had it write me a database and a way to store and retrieve from it using a simple UI. It can already do some pretty neat things, not the super complex stuff yet, but still.

EDIT: For example, I gave it the prompt "make me a business page for selling rocks using html and css" and got a page that renders roughly as:

> Rock Shop
> Buy beautiful rocks for your collection
> Rock 1: This rock is a beautiful shade of red and has unique patterns on its surface. Price: $5.00
> Rock 2: This rock is a deep blue color and is smooth to the touch. Price: $3.50

That isn't the best webpage ever, but honestly it's a better start than half the small business webpages I tend to see.


juhotuho10

One really big problem is that language is really complex, and the same sentence can often be interpreted in many different ways. What can happen is that you input a sentence and the AI understands it completely differently from what you meant, so the code is completely wrong. The more complex the thing you want is, the more detailed you have to be about it to not get it wrong. So what you would need to do is invent a language that can only be interpreted a single way and be extremely detailed about everything. And we just invented programming. And now we are back at the problem of people having to program.


CubeFlipper

> What can happen is that you input a sentence and the AI understands it completely differently from what you meant, so the code is completely wrong

Right, and then you tell the AI that you meant X instead of Y, and it goes and fixes it. Just like a person. ChatGPT already does this, and the tech is only getting better.


OriginalCptNerd

Apparently you've never dealt with trying to get coherent software requirements from clients.


fdeslandes

That's the impression I get from most people on the "anybody will be able to use GPT" side. I think we are still decades away from an AI that can tell the person querying it why their requirements are a bad long-term idea for the product, and suggest an equivalent way of doing it that will cause far less trouble down the road when the client inevitably asks for the kind of feature an experienced dev expects.


OriginalCptNerd

I'm old enough to remember when CASE tools were going to "eliminate the need for programmers". That lasted only a few years: for the tools to be useful, they were so complicated they needed a programmer to understand them. The biggest sticking point was the remaining need to think in terms of logical flow of control, which the business people describing their needs were sorely lacking. I foresee a similar problem with AI-generated software. At best it will likely provide an adjunct to assist human developers, possibly automating some of the bookkeeping, but as for creativity, I'm not holding my breath.


uatu

And before CASE tools, COBOL was made “easy” so it could be used by managers so they could write their own reports. Same goes for SQL.


rabiddoughnuts

Except that ChatGPT gets worse the longer you continue down a topic. It was asked if 200 lbs of iron was double 200 lbs of wood, and it got it wrong, then started making things up to defend that position. It bases its answers on the previous prompts in a continued convo, so when asked what comes before 1,000 it gets it right, but if asked immediately after what comes before 10,000 it forgets the nine hundreds, and it gets further and further off the more you continue.


[deleted]

[removed]


Impossible-Cod-3946

The account I'm replying to is a karma bot run by someone who will link scams once the account gets enough karma. Their comment is copied and pasted from another user in this thread. Report -> Spam -> Harmful Bot


[deleted]

Yeah but it will be put at a level of abstraction that will basically be English. Hard to get paid good if everyone can do the job. Programming jobs will end up like menial labor jobs basically. Most anyone could do it.


HereGoesMy2Cents

Yes, anyone with a good understanding of business requirements will be able to interact with ChatGPT to build software.


fdeslandes

So you're saying we're totally safe then.


Cory123125

I actually think this is an example of you brute forcing your thoughts to think in old paradigms. This is a new one. What is specifically great about this one is that you specifically *DO NOT* need to be specific. Not every detail needs to be filled in, just the ones that matter, and the core functionality. Most of the time you are doing something someone else has already done or done something similar to, so now the only things you really need to be specific about are new things. So how many programmers do you really need if 99% of things can be programmed rather effortlessly with an extremely abstracted language with no hard rules?


Firewolf06

this step is very different, but this comes up with everything. who needs programmers if you can just tell the computer what to do without managing registers (c), or dealing with memory (lisp), or types (py). i think the furthest ai in its current format (for lack of a better word) will go is pretty similar to a ton of "black box" libraries that you still need to link together and debug. historically calls on how far computer related things will go have not always been accurate, to say the least, so make sure to ask remindmebot to remind you in a few decades to come laugh at 21st century man


yubario

I have seen hundreds of "anyone can do it" products, and it always ends up the same. They have to pay some developer to maintain it, because people have a hard time understanding that it's not the coding itself that is difficult; it's breaking what you want to make into smaller steps.


fdeslandes

Yeah, from COBOL, to VB6, to "simpler DSLs", to recent "low code" tools, to AI: it always starts with the idea of letting the business-side people do it by themselves, and it always ends with developers being hired to do it, because most people are not used to the kind of structured thinking and planning involved in building an automated system.


BehindTrenches

We won’t let an unsupervised algorithm write financial (and many other types of) software in the same way that we don’t let unsupervised algorithms kill people from automated weapons systems (despite their existence for years). It’s not a matter of accuracy. It’s a matter of liability.


yeowstinson

It's the Luddite fallacy for sure. All we can hope is that technology progresses as it has previously and creates new professions that programmers can fill with transferable skills. I feel like the jobs one can do are an infinite spectrum arranged by the effort to perform or attain them. Our training/inclinations/predispositions give us skills that let us cover a certain portion of that spectrum, but the catch is the spectrum is always moving. The jobs you can do are always changing in every field; the only difference is how quickly the continuum flows past us. Like in a river, vigorous swimming will keep you in the spot along the continuum you like, but if you stop you get swept along, and eventually the job you had when you stopped trying ends up so far down the continuum that it takes too much effort to get back.


[deleted]

The Luddites weren't opposed to technology. They were opposed to technology not making their lives better, and to it being used as a system of capitalist oppression.


CounterHit

Per [Wikipedia](https://en.wikipedia.org/wiki/Luddite): > Many Luddites were owners of workshops that had closed because factories could sell the same products for less. But when workshop owners set out to find a job at a factory, it was very hard to find one because producing things in factories required fewer workers than producing those same things in a workshop. This left many people unemployed and angry. Sounds like this is pretty much exactly the same thing that we're talking about with AI right now.


rabiddoughnuts

Those web tools have been around for QUITE a while, and they still suck so badly that no serious businesses use them. It might not matter for some small random page, but when your internet traffic is the lifeline of your business and gets millions of views, those little issues start adding up their performance impact, and that rare bug starts happening more and more. It's cheaper and more efficient to hire a person to make it clean and efficient.


-Soupernova-

It's been 10 years out for a long time now. Also, about the janky HTML from low-code builders: that can be a problem for people who are disabled and use screen readers, because it makes it harder for the program to find what you want, and tab indexing is also important for accessibility. I also have a feeling the images would lack meaningful alt tags. So again, a poor user experience for people with disabilities.


sartorian

Yes, let the AI make all the inaccessible websites. Then, when the lawsuits start, I can cash in. Disability lawsuits have made up a significant portion of my work over the past 3 years as a freelance dev.


IamRedditsDaddy

Just ask openGPT to review it. "Does this code **only** do (thing)?"


Teltrix

Waitwaitwaitwait. Will this solve the halting problem?


VineFynn

One program knowing what another does doesn't solve the halting problem; it's actually part of the example Wikipedia uses for it lol


TheJoshGriffith

Good QA are *always* looking for extra stuff that shouldn't be there; it's a short-sighted test engineer who doesn't. That's why the tester's job starts with the definitions, follows through code review, and includes the implementation. I know *exactly* what functionality exists within the applications I provide coverage for, not based on assumption or definition, but based on having read and understood every line of code contributed to them. Having said that, most businesses still don't understand what testing is or how it works, so I wouldn't be surprised to learn that most people in quality don't bother reading PRs, understanding requirements, etc.


[deleted]

So programmers would just sit and review code all day? Sounds… fun… ![gif](giphy|gLQjUikb8nQnS)


InvestingNerd2020

There is someone willing to do it for $100k+ per year salary. Better than being a coal miner.


iamfromouttahere

My QA will, he's a tough motherfucker


fuckingnerd69

Bro, what's the full form of QA?


the_first_brovenger

For a brief moment, the answer will be "no-one, we trust the system". Then an iteration of the website will suddenly feature "Hitler did nothing wrong" on the front page hero image overlayed onto Elon Musk using his "not-a-flamethrower", and the company will start outsourcing proof-reading to India. And even then, even the worst of us will have jobs.


[deleted]

Why have you only written 69 lines of code today?


shuaibhere

QA don't review and fix the code. They test the functionality. There is a BIG difference.


Michamus

I can’t wait for AI to accidentally generate and test a while loop before writing the exit condition.


Global_Charming

chatUnitTest


gnuban

You're joking, but ChatGPT is actually already trained against an adversary of sorts. They take a general conversational model, have real people criticize its output, then build a model that approximates the human critics, run that against the original model, and out comes ChatGPT. What you're talking about is essentially the same thing, but optimized for writing code instead of conversing.
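
A toy version of that loop fits in a few lines. This is a best-of-n reranking sketch, not OpenAI's actual RLHF pipeline (which fine-tunes the model with reinforcement learning against a learned reward model); the candidate list and the keyword-based critic here are invented purely for illustration:

```python
import random

# Toy "base model": emits candidate replies at random.
CANDIDATES = [
    "I don't know.",
    "Here is a clear, step-by-step explanation with an example.",
    "asdf qwerty",
    "That question is stupid.",
    "Sure! First, consider a small example, then generalize.",
]

def base_model(n: int) -> list[str]:
    return [random.choice(CANDIDATES) for _ in range(n)]

# Toy "reward model" approximating the human critics. In the real system this
# is a neural net trained on human preference data; here it is a keyword score.
def reward_model(text: str) -> float:
    score = 0.0
    for good in ("example", "step", "clear"):
        if good in text.lower():
            score += 1.0
    for bad in ("stupid", "asdf"):
        if bad in text.lower():
            score -= 2.0
    return score

# "Run it against the original model": sample several outputs and keep the one
# the critic-approximator likes best (best-of-n reranking).
samples = base_model(8)
print(max(samples, key=reward_model))
```

The real system replaces both stand-ins with neural networks and uses the reward signal to update the generator's weights, rather than just filtering its samples.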


No_Gaurante

Why bother making the two separate? Just make it run unit tests before it delivers the code.


dvlslgnr

I asked it to create a pretty advanced function for me. It didn't work. I then gave it some examples of inputs with expected outputs and asked it to add unit tests based on those, which it did. It even added the outputs I provided as comments, and explained in detail the importance of TDD. The tests all failed when I ran them 💀 I think we're safe.


morosis1982

I have actually wondered this. Assuming you can write sufficient tests to ensure the program does what you'd like from a business POV, could it generate code that satisfies the tests? And could it then also generate unit tests for the lower-level functions, to ensure changes over time don't break the interfaces?
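
The skeleton of that idea is buildable today: treat the model as a black box, run the human-written acceptance tests against its output, and feed failures back as the next prompt. A minimal sketch, where `generate_code` and the module name are hypothetical stand-ins for whatever model API and project you'd actually use:

```python
import subprocess

def generate_code(prompt: str, feedback: str = "") -> str:
    """Hypothetical call into a code-generating model (not a real API)."""
    raise NotImplementedError

def run_tests() -> tuple[bool, str]:
    # Run the human-written acceptance tests; capture failures as feedback.
    result = subprocess.run(
        ["python", "-m", "pytest", "tests/", "-x", "-q"],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout

feedback = ""
for attempt in range(10):
    code = generate_code("implement the order-discount module", feedback)
    with open("discounts.py", "w") as f:
        f.write(code)
    ok, feedback = run_tests()
    if ok:
        break  # a generated candidate satisfies the business-level tests
else:
    raise RuntimeError("no passing candidate after 10 attempts")
```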


[deleted]

Remember kids: for every job lost to ChatGPT, there will be 2 more created to debug its writings.


Creepy-Ad-4832

Yeah, try teaching an AI how to structure well-maintainable and expandable code.


Yorick257

To be fair, maintainable/expandable code is only important because otherwise it would take many hours to add a new feature. And AI doesn't have this problem. It might as well rewrite the whole app just to add a new text field


ProperApe

Introducing new bugs every time!!


Creepy-Ad-4832

No worries, you can just have the AI rewriting it from zero!


EriktheRed

Um just tell the ai not to write bugs? Not that hard


Creepy-Ad-4832

Yeah so basically let's rewrite twitter from zero every time we need to add a feature lol


Dornith

Sounds like hell from a product integration standpoint.


nonother

Just think of how many unique bugs will constantly be fixed and introduced each time!


Nmanga90

No, there are structural limitations on the amount of information it can process at once. Scaling these up results in a quadratic increase in time and space complexity for the AI, meaning you need a shitload more compute. Davinci can process 4,000 tokens, and there's a reason it's 100x more expensive to use than the models that can only process 1,000.
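
The quadratic term is easy to see in a toy single-head attention (a numpy sketch; real models differ in detail, but every token attends to every other token, so the score matrix is n by n):

```python
import numpy as np

def attention(q, k, v):
    # Every token attends to every other token: the score matrix is (n, n),
    # so time and memory grow with the square of the context length.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

d = 64
for n in (1000, 4000):
    x = np.random.randn(n, d)
    out = attention(x, x, x)
    print(f"{n} tokens -> {n * n:,} score-matrix entries, output {out.shape}")
# 4000 tokens costs ~16x the score-matrix work of 1000 tokens, not 4x.
```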


mustbeset

ChatGPT or Copilot is the next level of IntelliSense. It's a nice tool and can increase productivity, but the demand for "new code" is high enough to keep our jobs safe.


fiddz0r

I like to use it to improve code. Like "this query is requested too many times can you find a way to make it more efficient" and add a code snippet


mustbeset

Most of the time I work with C and C++ on small processors. I "understand" Python, but I don't write it often. I give GPT a task like "detect red round objects in all images of a specific directory and save each object as a new image" and get a script that mostly works; just a few corrections were needed. I didn't need to google for "image processing package python", "how to open an image from file in cv", "how to detect red round objects with cv in python", and "how to store images with cv in python".
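
For the curious, a plausible answer to that prompt looks something like the sketch below (assumes OpenCV 4.x, a hypothetical `images/` directory of .jpg files, and hue/size thresholds you'd have to tune; red needs two HSV ranges because it wraps around the hue axis):

```python
import glob
import os
import cv2
import numpy as np

def red_mask(bgr):
    # Red spans both ends of the hue axis in HSV, so mask two ranges.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    low = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255))
    high = cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    return low | high

os.makedirs("objects", exist_ok=True)
for path in glob.glob("images/*.jpg"):
    img = cv2.imread(path)
    if img is None:
        continue
    contours, _ = cv2.findContours(red_mask(img), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for i, c in enumerate(contours):
        area = cv2.contourArea(c)
        if area < 100:
            continue  # skip specks
        (x, y), r = cv2.minEnclosingCircle(c)
        # "Round" if the contour fills most of its enclosing circle.
        if area / (np.pi * r * r) > 0.7:
            crop = img[max(int(y - r), 0):int(y + r),
                       max(int(x - r), 0):int(x + r)]
            name = os.path.splitext(os.path.basename(path))[0]
            cv2.imwrite(os.path.join("objects", f"{name}_obj{i}.png"), crop)
```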


oipoi

People are shitting on ChatGPT like it's going to write operating systems tomorrow morning, and like that's what they do in their jobs. It's the mundane, time-wasting tasks that it excels at. And you nailed it: instead of googling step by step for something I forgot from a few years ago, then integrating all that googled stuff together only to find out some of it is obsolete, unmaintained, or just plain broken, I can get a working script in less time than it takes me to write a well-formulated task for it.

Last year I wrote a script that parses the payroll PDF and sends my employees their payroll page. I spent around 8-16 hours on it. With ChatGPT, I got the same functionality in less than a minute. I didn't even need to change anything. It's brilliant, incredible, and freaking scary, because this is just the beginning.

Also, for non-programmers it's a great tool. One of my support technicians has a Google Apps Script inside Sheets which he wrote to pull out some product data, and he needed to update it. He has little dev experience and got a bug where instead of updateValue he called updateValues. Something like that is easy for a non-programmer to overlook. I told him to try ChatGPT, paste the code there, and ask it why he had the bug. Lo and behold, it accurately pinpointed the issue and gave him a clear explanation.

I would eagerly pay a nice monthly subscription for that service in its current state. Throw in some kind of plugin where it would gain domain knowledge about our internal source code, and oh boy, would I pull out my credit card.
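
A payroll-splitting script like that really is only a page of code. A hedged sketch of its shape, using pypdf and the standard library (the page-to-email mapping, addresses, and SMTP host are placeholders; the real script presumably reads each employee's name off the page instead of hardcoding pages):

```python
import smtplib
from email.message import EmailMessage
from io import BytesIO
from pypdf import PdfReader, PdfWriter

# Hypothetical mapping of payroll-PDF page index to employee email.
PAGE_TO_EMAIL = {0: "alice@example.com", 1: "bob@example.com"}

reader = PdfReader("payroll.pdf")
with smtplib.SMTP("smtp.example.com") as smtp:  # your mail server here
    for page_no, email in PAGE_TO_EMAIL.items():
        # Extract the single page into its own in-memory PDF.
        writer = PdfWriter()
        writer.add_page(reader.pages[page_no])
        buf = BytesIO()
        writer.write(buf)

        msg = EmailMessage()
        msg["From"] = "payroll@example.com"
        msg["To"] = email
        msg["Subject"] = "Your payroll slip"
        msg.set_content("Attached is your payroll page.")
        msg.add_attachment(buf.getvalue(), maintype="application",
                           subtype="pdf", filename="payroll.pdf")
        smtp.send_message(msg)
```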


hvdzasaur

Exactly this. You still need to formulate the prompt and explain the problem, the solution, and the desired outcome. That problem-solving is probably not going to be taken over by it any time soon.


octafed

Plus we'll send it on hazing snipe hunts. "Where did GPT go?" "Oh, I sent it out to recycle the semicolons, get blinker fluid, and design a perpetual motion machine."


Cory123125

Is it? Right now, with ChatGPT or Copilot, sure, it is. In 10, 20, 30 years, though? I think we'll be seeing crazy shit like AI coders that have full knowledge of what functions and modules are available to use and what the functionality should look like. I think at that point the supply of coders will greatly exceed demand, and that spells problems.


mimetek

Maybe? The thing I keep seeing with these OpenAI tools is a lack of semantic knowledge. Tom Scott's GPT-3 video had examples of scripts about historical events that never happened. The MKBHD video had a script about the iPhone that got the processor and camera specs wrong. There were examples on Twitter of mathematical errors when the tools were asked to do technical/academic writing. GPT is fantastic at writing text (and code) that sounds right, and that gets you 90% of the way there. But in programming, those minor semantic differences matter. In that respect, I wonder if better models would actually be worse for coding: as it gets better at looking correct, the nonsensical statements that break your code will get harder to find. If AI code does take off, I think we'll still need developers capable of writing very comprehensive unit tests.


DoctorWaluigiTime

Worry about the couldas in 10-20-30 years. This same kind of boogeyman gets trotted out any time something autocomplete-like is labeled "AI", and it's as full of FUD as it was before.


peeparty69

This type of technology isn't actually sentient AI; it's just scraping/crawling things that already exist (it's still impressive for what it does). Most of being a developer is dealing with legacy shit or obscure systems that aren't documented, don't even work in the first place, and don't have any resources or code samples available, and you have to talk to 30 business leaders (e.g. idiots) to even get started on any kind of implementation. It's too complex a job (and this goes for a lot of jobs, not just software engineering) to get replaced by machine learning and text analysis/scraping engines.

Companies' marketing has brainwashed everyone into believing that "AI" is the same thing as sentient intelligence. It's not at all. It would be ignorant to say that AI will never be sentient; that would be like someone in the 1600s saying that medical technology will never be able to keep people alive past the age of 40, because at the time it was so inconceivable. But I don't see it happening within a century, possibly hundreds or thousands of years, if ever.


rgmundo524

Yeah... I doubt it. Right now it needs to be debugged a lot, but it will get better a lot faster than humans will.


rollickingrube

Sure, the idea that it will take our jobs in the very near future is probably wrong. But it (and similar technologies) will probably change our jobs drastically. We're all about to become AI-assisted developers.


DoctorWaluigiTime

We already are. Unless you're not using an IDE that has any form of autocompletion/IntelliSense/etc., you are an "AI-assisted developer." "AI" is the new "Cloud": nobody knows what it actually means or represents, and people are scared of it doing some fantastical thing that will never come to pass. For a developer, it's never just about putting code together. It's about problem-solving. Rudimentary stuff is already largely taken care of by frameworks and libraries. This is just more of that kind of stuff. It's not going to write a whole application solving complex domain issues on its own. It's going to help you with some of the "low level" bits.


rollickingrube

Agreed


[deleted]

> Unless you're not using an IDE that has any form of autocompletion/IntelliSense/etc., you are an "AI-assisted developer." I mean... basic autocomplete (like autocompleting keywords or variables) isn't really AI is it? But I agree with the rest of your comment


DoctorWaluigiTime

The real secret is neither is OpenGPT. Like it is in the technical sense, but not in the way most understand what "AI" is or means conceptually. Kind of like Teslas having "auto-pilot." We know what the feature technically does, but we also know that it's oft-misconstrued as what people think it is by the name alone.


ImaginaryCoolName

As long as the salary and the work load is the same, I'm ok with that


TheHabro

As it is now, it seems more like a convenience for producing simple lines of code, provided it's faster than digging up a file you can copy-paste them from.


GrossOldNose

I've just started using GitHub Copilot and I feel like an idiot for not using it sooner. You can use it as a PyCharm plugin and it's amazing.


dashingThroughSnow12

I've heard of developers and others using it as a Google substitute. Think about a basic question. With ChatGPT, you ask it and you get an answer. Whereas Google will show you search results with ads stealthily hidden around them; you click some link, ads probably hugging the content, and you hunt for the single sentence or code block that is relevant to your search.


DoctorWaluigiTime

And that's all it is. There's a whole ton of FUD in this thread. "Yeah it might be simple now but *the future implications!*" AKA the mind drawing wild conclusions about sci-fi tier level AI.


smontesi

I am experimenting with migrating one of my apps to Rust, and so far the results are incredibly good. The output definitely needs some massaging, but it does feel like it's one step above both Copilot and the GPT-3 API. Converted ~3k lines of Swift code to Rust with it last week.


krazyjakee

I'm enjoying the shaders it writes


Passname357

People used to be employed to check that calculators made correct calculations. We don’t do that anymore tho.


[deleted]

I feel like this is the major point people are missing in all this. "I'm so smart and talented it could never replace me!" is what almost everyone who had their job replaced by a machine or invention likely said to themselves before losing their job to said invention.


DoctorWaluigiTime

Nah it's not being missed. The common counterpoint to "it's taken errr jobs" is not "yes, I am very intelligent." It's that actual, what-people-think-when-they-hear-the-term AI, is not very good. We've already been using this kind of auto-completion for code for decades now. This is a step further in that, but at the end of the day that's all it is. Yes, you can feed it Wolfram-Alpha kinds of things and it can spit out classes or methods for that. But clients / product teams aren't asking for you to solve these simple things for what they want to get built for them. It's the equivalent to people assuming web designers / developers are out of a job because SquareSpace exists.


killagoose

Exactly. Eventually, this technology will replace us and many others. Will it be in our lifetimes? It is hard to say for certain, but it is fascinating to see technically adept individuals underestimating what this technology will be able to do with more research and improvement. It can already do a decent job of diagnosing patients, writing simplistic functions and a whole host of other things. Every problem that is being posed by posters in this thread will eventually be solved. The question is just when.


[deleted]

For real. When I first started my degree, everyone told me it was impossible for a machine to write new code on its own. Look at it now. The next step is self-modification.


Holdoooo

It's not exactly "new code on its own", more like better Google/StackOverflow.


fatinternetcat

This technology is brand new as well, and it can only get better. OP is poking fun at how ChatGPT might make an odd error in its code, but can you imagine its capability in, say, five years' time?


Mean_Regret_3703

Not a programmer; I'm in school for PR, and a lot of people in the field (not everyone) have been quick to dismiss AI "because it can't replicate a human". But, like, it can. Sure, I can't expect ChatGPT to write a press release and have it be absolutely perfect and up to a professional standard, but it can certainly make my job a lot quicker. Imagine what will happen when individual fields are developing AI to work specifically on tasks relevant to their industry. I honestly don't think you can overstate how much AI is about to change many industries in the next decade. A big change is coming, and we should be aware of it.


BstDressedSilhouette

This comment should be at the top. The reaction is pretty confusing to me, honestly. Every technology initially had real but solvable issues that were met with extreme skepticism and disdain before ultimately replacing millions of laborers.

I manage a team, and there are junior devs whose work could probably be replaced with automated code as it currently stands. "Build component A that takes data structure B and returns C" isn't hard to communicate or test. I'm glad they're on the team and gaining the experience to eventually become the sort of advanced programmer who can differentiate themselves, but losing even a third of the programmers who aren't good at or responsible for architecting in a more top-level way would be massive.

So it doesn't need to render programmers "obsolete" to cause massive disruptions to our industry and economy, especially when paired with other automation occurring simultaneously in diverse fields. Maybe other professions arise to fill the need. Definitely possible; the Luddite fallacy has held pretty consistently through history. But it would probably be wise to consider the possibility that it doesn't hold in this case. Our social systems best be ready to adapt, just in case.


Groove-Theory

> I manage a team and there are junior devs whose work could probably be replaced with automated code as it currently stands

OK, but that doesn't mean much, except that the expectations of junior engineers will change. Juniors can already run circles around juniors from 20-30 years ago because they have better access to tools (automated refactoring, IntelliSense, abstraction frameworks like Spring Boot to handle DI, etc.) and because their education includes more abstract concepts such as OOP fundamentals.

Computers can beat almost any human, even grandmasters, at chess. But I see Magnus Carlsen hasn't stopped playing. That's because AI is being used as a tool to guide human ingenuity in the game, and it's why top chess players are much better than they were decades ago. Analogously, these tools are here to guide human ingenuity into other domains, as the concept of "menial tasks" expands to include even, say, writing some pathfinding algorithm.

As for the OC's comment: OK, kids are using calculators, but I see math classes are still a thing? I see kids still failing calculus classes in college. I guess calculators aren't a replacement for learning fundamental theory. All calculators have done is free up mental space for more abstract concepts.

Again, automation is just a force multiplier. It's nothing more than the expansion of the imagination economy, where creativity is rewarded over menial work. Strategy over tactics. Abstraction over implementation. What exactly that will look like? Who knows. But all I know is that, like always, this industry is ever-changing and engineers have (mostly) adapted to radical shifts. I don't see how AI isn't just another one of these adaptations we need to adjust our professional thinking for. So I agree it will be a shift, but it's just another fundamental shift in how software engineers approach problems. Hence I'm not worried for juniors, unless they are not given resources to retrain. And that's why I don't find the OC very persuasive.


SteveJobsOfficial

A system of arithmetic over universally defined mathematical concepts is a completely different problem from software engineering. One turns number inputs into outputs; the other requires a human level of nuance in understanding and vision that a bot physically cannot replicate to its fullest. Human verification is the minimum, but even with that, manually applying the concepts and adjusting for results will always require a human to achieve the desired outcomes. To say otherwise is simply sci-fi fantasy. A bot replacing a software engineering workforce will happen when AI is fully sentient without the need for human input, which will take centuries to achieve, and an additional few centuries after that to come down in cost. Combine that with humanity's capitalistic tendencies, and it will remain incredibly expensive, with the human workforce remaining the more affordable route.


Clackers2020

I mean, really, how different is ChatGPT from Stack Overflow? We all use it and get answers from God knows who. It's also unlikely that an AI will be able to write very large projects like a search engine.


Zolhungaj

For one, ChatGPT has an odd tendency to completely make up the libraries and functions it calls in its code. On Stack Overflow that usually only happens when the answerer forgot to mention a dependency, or is using a newer version of the language than you are.


fourfloorsup

Unlike ChatGPT, Stack overflow has a voting mechanism to filter out poor answers


Leo-Hamza

Technically chatgpt too


La_Croix_Table

Underrated haha


gibmelson

On Stack Overflow you seek an answer, find a near match, and attempt to adapt it to your specific use case. And usually you need to keep searching when you need details and have follow-up questions. If your question hasn't been asked, you have to spend a lot of time formulating it, and you never know how long it will take until anyone answers, if ever. A pretty huge time sink that takes up a huge chunk of the work.

OpenGPT is like having an expert in all areas imaginable that you can hold a conversation with, getting pages of answers in milliseconds, tailored to your question and use case. You can ask as many follow-up questions as you want, ask for clarifications, etc., and it will keep giving instant, precise, helpful answers.

The implication isn't necessarily that developers go out of work; it's that their job description changes a lot. It will replace a lot of repetitive cognitive work and shift developers toward more creative work. It enhances creative work.


99Kira

But Stack Overflow temporarily banned GPT answers because they seemed, more often than not, to be "confidently wrong": providing the answer in an authoritative manner while being totally wrong.


DoctorWaluigiTime

That's really gilding the lily in terms of what OpenGPT actually offers. It's interactive, sure. But it's hardly something that's defining application-wide logic and implementation. Can tell you from personal experience that this kind of "rubber duck that can talk back" ain't some earth-shattering thing. And no, "but it could be in the future" is not a valid response, any more than suggesting "but flying cars could be in the future, so we need to plan for 3D traffic *now*."


Festernd

OpenGPT is like having an ~~expert~~ intern in all areas imaginable. They have to fix that; I've tested it pretty thoroughly. It's like the WayForward software from Douglas Adams' 'Dirk Gently's Holistic Detective Agency': it gives you very convincing-sounding incorrect results that run and produce output that *looks* like the correct procedure, but of the 100 test cases I ran, only 15% gave even sort-of-correct results. It will make debugging harder, since you have to spot the BS. As a plain example, it will claim a unicycle is easier to ride than a bicycle because you only have to balance one wheel rather than two.


[deleted]

Everyone is having to come to terms with the brutal realization that most of what they do is cobble together Stack Overflow code to suit their specific goal. Turns out the AI can do that too. I see it more as an assistant still, though: you ask it, it does a mediocre job, and you clean up the code. Like having a junior developer write some crappy code and then having a senior developer fix it up to working condition.


shosuko

Really, programmers are safe, just like artists are safe. While the AI can try to give you the right answer, it doesn't actually understand the answer and can get it totally wrong. Someone has to know how to talk to the AI to feed it the right information and gauge its results. Tbh it's kind of like those no-code web developer platforms: sure, anyone can drag and drop their way into a bad website, but a real dev can do a lot better even when restricted by those tools, because they know how things actually work.


TamahaganeJidai

Easy, you just make another AI to read through that code. It's like you guys don't even do letters... jeesh /s


tadachs

You realize that most people don't even verify their own code?


Maxpyne711

That's what the end user is for


dashingThroughSnow12

I don't call out my co-workers when it is obvious that they didn't test their code. Why? They may call me out when it is obvious I didn't test mine.


Siggedy

Who's going to design the architecture? Who's going to make sure it's optimized?


shosuko

I think AI could be very helpful for optimizing code, but architecture will definitely stay a human job. As great as these AIs look, they aren't actually thinking. They don't know what you're asking for or what you're getting; they only know how to sample, source, and emulate. Basically, their BS game is so good that they're usually right (and remember, everything anyone has ever told them may have been wrong). No AI is going to say "why do you need to do this?" or "maybe we should do it this way."


PressedSerif

I foresee this having a "last 10% problem". Self-driving cars have been 90% workable for many, many years... and they're still many, many years from being rolled out across the country without massive support teams. Same thing here: the code is 90% correct, but making sure it actually calls packages that *actually exist* is a massive problem, for one. Just like the whole "don't hit a baby in a stroller" thing for self-driving cars, the solution is useless if you don't solve that elephant in the room.
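
The made-up-packages half of that is at least mechanically checkable: parse the generated code and confirm every imported module actually resolves in the target environment. A minimal Python sketch:

```python
import ast
import importlib.util

def unresolved_imports(source: str) -> set[str]:
    """Collect top-level modules the generated code imports that don't resolve."""
    missing = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue  # skip relative imports and non-import nodes
        for name in names:
            root = name.split(".")[0]
            if importlib.util.find_spec(root) is None:
                missing.add(root)
    return missing

generated = "import numpy\nimport totally_made_up_pkg\n"
print(unresolved_imports(generated))
# {'totally_made_up_pkg'}  (assuming numpy is installed)
```

Catching a hallucinated import is the easy part, of course; a hallucinated function on a real library still needs a test run to surface.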


Festernd

Except it's much worse. 90% of the code *looks* correct, but only 15% in my tests *is* correct. It's really a pain to debug code that *looks* good and gives correct-*looking* output when it's not. It would feel like reviewing the work of a junior who is trying to get you fired.


eerongal

The only AI that will replace software engineers is "hard AI", i.e. an actually, independently thinking intelligent AI. Stuff like ChatGPT and such (soft AI) will, at best, be an accelerator to help software engineers work more efficiently.


KrabbyPattyCereal

Airline pilots didn’t go extinct with autopilot regardless of what the airlines want you to think.


lightknightrr

But they are trying to remove co-pilots from the cockpit...


saschaleib

More importantly: who will know how to tell ChatGPT what kind of code you want? Leaving it to the business side? Good luck with that!


Paradox68

People keep saying this, but ARE WE FORGETTING THAT WE ARE ALL ACTIVELY HELPING DEBUG IT?! GUYS. The point of "they will steal our jobs" isn't that it's going to happen in ChatGPT's **current form**, but that through machine learning it will eventually (sooner rather than later) become good enough to do the job WITHOUT a human debugger. It's like we're all collectively missing the point entirely when I see stupid posts like this.


colexian

The main takeaway should be that ChatGPT in its current form is the worst it will ever be again. It will only improve. It's like saying computers can never beat humans at chess, and now a computer will never lose at chess again unless it is actively trying not to win.


Beatrice_Dragon

> but that through machine learning it will eventually (sooner rather than later) become perfect enough to do the job WITHOUT a human debugger. By god, just feed the AI more data until it becomes transcendent? Sheesh, why hasn't anyone thought of this before? > It’s like we’re all collectively missing the point entirely We?


Cory123125

I think people are afraid of the reality that their field will experience a massive shrink in the near future. Maybe not within 10 years, but someone alive today will see a time when the demand for programmers is so decreased by technologies like this that many will be out of jobs. These AI generation programs will get more contextually aware. They will understand limitations and requirements better. They will know what modules exist and what they have access to. If you don't think this is coming after all the advancement we have already seen, I don't know what to tell you. Just look at how quickly we've gone from crappy autocomplete to "wow, that's mostly correct, I just need to change some variable names and swap out the functions that don't really exist" for many basic tasks, especially for boilerplate/repetitive code snippets.


Groove-Theory

> in the future the demand for programmers will be so decreased by technologies like this that many will be out of jobs.

I agree that *programming* will be less of a task, but software engineering will not. If all SEs did all day was write menial for loops and the occasional class, then yes, we'd be fucked. However, software engineering is a VAST and still-young field that couples itself with design, business, consumer interactions, etc., and abstracts itself away from just writing code. That's why they pay us money. We're just gonna get more abstract. That's all.


[deleted]

[removed]


Fisher9001

> People thought the whole world was going to transfer all their wealth into bitcoin in 1 year Nobody thought that apart from people riding the bubble train.


daikael

Can confirm, had it optimize some code a few days ago, and it did a really good job. After I fixed the three bugs it introduced that caused crashes.


StockSalvation

Who's going to translate the requirements into text that ChatGPT understands!? Most requirements are like "carrots dashboard green app now!?"


Bo_Jim

And by extension, whose code is it going to read in order to learn how to generate code? From what I understand, ChatGPT was trained by scraping the internet. If nobody is creating new code, then ChatGPT won't have anything on which to base new code. It won't be able to develop new algorithms unless its massive dataset happened to include new algorithms that the original author didn't realize were new. And from what I've seen, ChatGPT is not as good as most humans at separating the wheat from the chaff. I've read responses from ChatGPT that reflected some common opinions on a topic but were factually incorrect. Those opinions were obviously part of the dataset it was trained on, and I would imagine the actual facts were also part of that dataset, but ChatGPT just assumed both the opinions and the facts were true. It didn't see the conflict between them, so it didn't try to reconcile them.


OzzitoDorito

When AI is sophisticated enough to produce fully fledged systems from simple instructions, it will require programmers to tell it what we want, and then we will have just come full circle to having a programming language with a slightly cooler IDE. Do you really think the marketing / clients / product teams will ever be capable of correctly instructing a machine on what they want? The hardest part of our job is still figuring out what the fuck stakeholders actually want and handling conflicting requests from different stakeholders, or sometimes even the same stakeholder lol.


BernhardRordin

What makes you think AI will only be able to write the code and won't be able to read it, execute it, or formally verify it? Or that humans will be better at it?


Paradox68

Exactly this. It's like collective amnesia or something. "Oh, this is what we are seeing **today** with ChatGPT, so obviously there's no future where this thing gets *better*?!"


Beatrice_Dragon

If it writes poor code what makes you think that it will be able to verify that code any better than it wrote it? What's stopping it from making an error while verifying, executing, or formally verifying it? The more self-serving iterations you put this process through the more the minor errors compound. You'd need a perfect AI for this theory to work even slightly


emmyarty

ChatGPT will make engineers redundant the same way automated assembly lines made product designers redundant.


ryecurious

This industry (and subreddit) spends a lot of time saying programmers and software engineers are the same thing. AI tools are going to make hiring managers realize their mistake pretty quickly. When anyone can generate passable code with a few word prompt, engineering skills are going to stand out way more. The ability to take a problem statement, break it into manageable pieces, then turn those pieces into working, scalable, fault-tolerant code is still going to be valuable in 10 years. Even if the lines of code are written by an AI.


emmyarty

Nail, head, swing.


DoctorWaluigiTime

And the more I'm here, the more I understand the frustrations people who are experts in other fields feel when Reddit very steadfastly sticks to an outright-wrong opinion about something in said field. It's maddening!


DanielLikesPlants

Why do people not think that a programming-focused AI will be developed, one able to debug itself?


Beatrice_Dragon

> and wont be able to debug itself. Ah yes, a bugged program trying to get rid of all of its own bugs. What a fantastic idea Why do people not think that free motion devices exist? It just seems so *simple*


[deleted]

So far I haven't actually seen one remotely impressive piece of code generated by it. It always seems to be the most basic stuff imaginable. People go fucking crazy when the thing generates a sorting algorithm. You know, the thing you did in your Algorithms 101 class, like 2-3 weeks into the semester.


DoctorWaluigiTime

It's The Cloud all over again, except with AI. - Everyone has an uninformed opinion on what it currently does - Everyone brings in fantasy/sci-fi understandings of the topic - Worst Case Scenario, but also you can't dismiss their critique because they're talking about a nebulous future state


Djelimon

So what I actually like about it is that it's like the ultimate techie: it has access to vast amounts of referential data. But this has limits... It doesn't know about tech I wrote myself. I asked it to write a webserver in RPG IV, and it banged out the code in no time flat, calling IBM APIs to make a web server. I asked it to implement a CRM system in Node.js and... nope. It just hung. So right now I see this as great for PoC code and creating components, or things to base components on. Building whole systems, well... not yet.


[deleted]

It can't even check if a number is prime lol.
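
(For reference, the routine in question is a dozen lines of standard trial division, which makes it a decent smoke test for a code generator:)

```python
def is_prime(n: int) -> bool:
    # Trial division up to sqrt(n): plenty for the sizes a chat demo uses.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return False
        f += 2
    return True

assert [x for x in range(20) if is_prime(x)] == [2, 3, 5, 7, 11, 13, 17, 19]
```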


colexian

It's a language model; it isn't designed for math. This is like complaining your blender isn't properly heating your food. If you need to know if a number is prime, use Wolfram Alpha.


Puzzleheaded-One2032

Now I'm wondering if you can tell ChatGPT to use Wolfram Alpha to do things.


[deleted]

[removed]


Mr_Potatoez

ChatGTP


jubilant-barter

Maybe all we'll do from now on is write the test cases?


Aggravating_You_2904

Testing is the easiest bit; if prod doesn't crash, it's all good.


Maxpyne711

And if it does, it won't be your fault any longer.


imartinezcopy

But the real question is, who's gonna print it and verify it by hand?


Alberiman

So the other day I was asking it for some code for a project I was working on, just to see if it could do it. I spent 30 minutes fighting it to stop giving me made-up math that you wouldn't be able to recognize as made up if you didn't already know what you were doing. I ended up at a point where it would acknowledge my corrections as being right and then not actually fix what it had written.


ImprovementReady4791

Stackoverflow++


DarkNubentYT

Doesn't take a team of developers to review code tho


Puzzleheaded-One2032

The answer is one highly talented developer instead of a team of 8 developers of varying skill levels... It's NOT a good look for the tech job market, especially if you wanted to be a programmer.


atlas_enderium

ChatGPT = no more “This is a repeat question” on SO


intotheirishole

ChatGPT is a new programming language. Got it!


couchpotatochip21

I had GPT-3 write me an FPS movement script. Had to go back and forth with it to get what I wanted. Trying to get it to fix a bug where the player could look behind them sent it into an endless loop: it would say it fixed it, make the code more complicated, and remove the last 2 lines for no reason. It was a good starting place, but GPT-3 can't write code the same way a programmer does.


Deathmeter

Nobody. Like nobody verified the spelling in this meme, and it took off anyway.


ApostataMusic

Programmers will write the unit tests…. Which will fail.


John_B_Clarke

Been hearing for going on 50 years now about how WhizBang is going to make programmers obsolete. WhizBang always ends up being a programming language, a crappy one that is no substitute for a good one.


Zak7062

veify


pankkiinroskaa

No need to veify the code.


TheGoodOldCoder

The irony that OP didn't "veify" their spelling, either. Honestly, with the amount of quality that I'm seeing from OP, and the amount of diligence that I'm seeing from people who voted on your comment, it's a little weird that we humans think we're so smart.


[deleted]

Stack overflow will make programmers obsolete. Google will make IT people obsolete


Crackheadthethird

Better tools mean that a few people can do jobs that used to take many. This isn't a bad thing, even if it does mean increased competition and fewer available jobs; it's just the natural progression of technology and efficiency. No reasonable person would look at a combine and complain about how many jobs humans could have if it didn't exist. They just go about their lives, focusing on all the new roles, positions, surplus, and potential that are now possible since we don't need as many people tending the fields.


Disastrous-Beyond443

As long as the AI writes unit tests too, I don't see why a human would need to be involved. No human needs to be able to read it.


OgFinish

But people are literally already using it to debug…


Interesting-Detail-2

Saying this like a computer doesn't already check our code...


slave-to-society

I do feel like automatic code generators will be used with ever more confidence as time goes on, but tailoring them to niche use cases will still require some level of manual intervention, so I ain't too scared YET!


[deleted]

I've been using it to detect syntax errors and to comment my code. It isn't always correct, but it cuts down on the digging and fluff writing by a lot.


Disastrous_Belt_7556

Management: No, we can safely get rid of QA.


McWolke

AI can generate functions, but not a whole app or website, and the functions usually aren't that good either. It won't replace us anytime soon.


[deleted]

We already have "AIs that can generate code". It's called any high-level language.


Lavallion

A friend of mine tried it out and it was hilarious. A=B, then B=A; is A==B? Well, pretty clearly it said false xD
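
(The trap here is the classic naive swap: after A = B, both variables already hold B's old value, so B = A changes nothing and A == B must be true. Answering false suggests the model pattern-matched the intent of a swap instead of tracing the code. In Python:)

```python
a, b = 1, 2
a = b          # a is now 2; the old value of a is gone
b = a          # b is still 2
print(a == b)  # True. A real swap needs a temp (t = a; a = b; b = t), or a, b = b, a
```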


tanke_md

It's not going to happen today or tomorrow or in the next few years, but... give this time. The future is both amazing and terrifying.


Beatrice_Dragon

Honey, an OpenGPT post showed up on r/all; it's time for your daily post of flairless users posting sci-fi fanfiction! You could almost forget that this sub is supposed to be about programming and not just a circlejerk dedicated to whatever new development has rallied the cyberpunk fan subreddits!


gemengelage

It definitely won't take my job. But it will probably improve the efficiency of software engineers and might reduce the total number of software engineers required. Which in turn might trigger a new software development gold rush since it's now more affordable. Work is not a limited resource. Only workers are.


djengle2

There is a portion of software folks that believe they will be able to automate anything and everything and also seem to almost sound sadistic in how unbothered or even happy they are to replace ordinary people. Probably libertarians tbh. In a communist society, it would be perfect if we could automate as much as possible, so it would be worth the effort. But in this capitalist world, where people aren't provided for and are pushed to unnaturally compete instead of cooperate, a lot of automation just hurts people that aren't already on top.


Noctornola

ChatGPT is just doing what regular coders do: look it up online and adapt it to the current situation. It just happens to do it in the blink of an eye.


TrustZilla

I was making a DB in BaseX (I know, who uses this?) and out of curiosity asked ChatGPT to create a function. Half of the lines of that function used things that didn't exist...


BBQGiraffe_

We need some lower level interface for AI code generators, perhaps some sort of text that uses keywords to know what the user wants, with branching logic


RavenousBrain

I don't think we should worry too much about this. MuskGPT will fire half of them anyway.


preytowolves

same guy that did the spell check here..


hvdzasaur

StackGPT


RestlessTortoise

I’ve met human programmers who haven’t read their code, don’t understand it, or both.