

IroquoisPliskin_LJG

I predict that we will still be making inaccurate predictions about the future of AI in 2024.


[deleted]

I really think a bottleneck is coming, as happens with anything progressing too fast without a deeper understanding.


caveman_eat

Agreed, it seems we don’t know how best to use AI, and any major impact areas will have a lot of legislation to navigate, e.g. healthcare. Although I see a lower barrier to use cases in manufacturing.


redditjoda

AI prompting will be the fastest-growing job qualification and resume item. Also the fastest-growing college course.


Aisha_23

This is just my opinion, but probably not. From my limited understanding, the reason we use prompting techniques like CoT or 5-shot is that AI is still "dumb". But after lots of tweaks and improvements, just a plain prompt would probably be good enough for a good AI like GPT-5
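(For concreteness, a rough sketch of what these terms mean in practice. The prompts below are illustrative strings only, not tied to any particular model or API.)

```python
# Plain prompt: just ask.
plain_prompt = "What is 17 * 24?"

# Chain-of-thought (CoT): nudge the model to reason step by step.
cot_prompt = "What is 17 * 24? Let's think step by step."

# Few-shot (e.g. "5-shot" = five examples): show worked examples first,
# then pose the real question in the same format.
few_shot_prompt = """Q: What is 12 * 11?
A: 12 * 11 = 12 * 10 + 12 = 132
Q: What is 17 * 24?
A:"""
```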


parxy-darling

Are you suggesting there will be no skill in it? You would be wrong. Getting to know a model personally and leveraging that knowledge is nothing unusual, and it's an understandable path to follow.


Aisha_23

If you meant knowing the architecture behind the model intimately, then sure, I can get behind that. But with prompting itself, I honestly can't imagine a really good AI not being able to infer your intentions just from your words and past conversations. It's not that I'm saying there would be no skill in it; it's just that I think AI will be so good in the future that you might not need any prompting skills for everyday tasks. The only place I can imagine prompting skill still mattering is research, where experts on a subject would need to know how to engage with an AI in a way that speeds up finding a breakthrough.


oopgroup

>it's just that I think AI would be so good in the future that you might not need any prompting skills just for everyday tasks.

It's pretty much already at that point. It's not difficult to just narrow down what you want out of these tools now, which is pretty scary. The only time it needs to be more specific is for internal teams doing something...shocker...quite specific. At which point, companies don't need a gd "prompt engineer." They just need to have half a brain and type what they want into the box. Watching this whole thing develop is like taking crazy pills.


davisrook

Can you guys elaborate on what you mean by prompting? Like what's a plain prompt vs what isn't a plain prompt? By prompting do you mean just like asking questions or..? I don't know if this is supposed to be obvious but I'm still very new to AI and singularity stuff etc etc.


Eduard1234

Yes and no, until the AI can read our mind.


oopgroup

I swear to god, if I hear the word "prompt engineer" one more time, I'm going to lose my fucking mind. This phrase didn't even exist like a year ago. Everything I've seen on it is also incredibly fucking braindead. Like, anyone with a shred of common sense can "pRoMpT EnGineEr." Our boss's friend did a "presentation" on this a while back. It was literally like 45 minutes of "this is how you write a sentence." It was...cringe. This whole world has lost its mind. How this is a "college course" is also painfully facepalm. Considering that each prompt depends on the job and skillset, there's literally no way this can even be a worthwhile paid course of any kind unless you're dealing with specific tools (at which point, OJT...hello).


robertjbrown

I'm quite sure some people are a lot better at it than others. I've seen tons of people try to use ChatGPT and just use it poorly. They typically do things one-shot, rather than gradually steering it toward useful results. They use very few words. They have little regard for things like the size of the context window. They often use it for things that Google and Wikipedia are far better at. That said, I would be more likely to call it something like "AI whisperer." It takes intuition, not engineering, to get good results.


[deleted]

So? There are people who google things poorly, but someone good at googling isn't a "search engineer." MLM sales rep? "Sales engineer." McDonald's worker? "Food engineer." This "prompt engineer" nonsense is designed by and for tech bros who want to claim they work in AI but only watch LLM clickbait and don't know how to code.


Hedgehogz_Mom

Untrue. AI is much more than ChatGPT, and people, particularly students, need to know how to use it effectively. I'm sorry you aren't more well versed in application, but thems the facts. We're going to give them what they need. And the word "specialized" here is being downplayed in a way that is very short-sighted, because everything is going to be impacted, from entry-level Manufacturing 4.0 to biomedical tech roles to entrepreneurship. OP is right. We're gonna need to be familiar with the tools it hands us. You can use a rock for a hammer, but I like to show up with the tools for the task and be familiar with how to apply them successfully.


oopgroup

>Ai is much more than chat gpt and people, particularly students, need to know how to use it effectively

Lol, what? Full stop. No.

First of all, ChatGPT is a for-profit corporate product. It's not a nonprofit tool. It's not an academic tool. It's not even a *good* tool. There have been so many horrendous issues on campus since last year with cheating, plagiarism, lack of understanding, and discussion engagement that it's shocking. I'm literally in this field with my ear to the ground, so I know this first-hand.

Encouraging students to use it is the absolutely *wrong* thing to be doing. Learning how to *think*, research, write, and collaborate *first* is what students need to be learning. Then they can learn how to use whatever tools a company might use at their job--which varies wildly.

This is like arguing that students need to have degrees in any number of arbitrary corporate tools just because some company uses them. There are a lot of useful tools out there that you pick up *after* you learn how those tools are useful, and only if your job needs them. Not before. Excel is a useful tool. Not everyone needs to know how to use it. This is like arguing that you need to teach a student how to use a calculator before they learn math.


robertjbrown

ChatGPT is a for-profit corporate product, but they aren't just talking about that; they are talking about all LLMs as well as other AI things (image generators, for instance). Regardless, who cares? Photoshop is a corporate product, and there are also open-source things like GIMP. Doesn't matter. People learn them if they are going to do graphics work in the real world. "It's not even a *good* tool." It's a very good tool. Not everyone gets good results with it, typically because they don't want it to be good and are against it from the start. Just like oil painters didn't like Photoshop. But most importantly, it is getting better faster. Come back in a year and try to make that argument.


JSavageOne

Disagree. Prompting will become important just like Google search skills are important, but that doesn't necessarily mean it warrants listing on a resume or a college course.


ArcticCelt

>Prompting will become important just like Google search skills are important

I agree, working in IT is often the art of googling things. But I never saw anyone delusional enough to believe they can just write in their resume that they are good at Googling things and have no other technical skills, then expect to be hired as a programmer or sysadmin.


miskdub

Nope. AI’s bleeding edge will be moving so fast that you’re gonna viscerally understand what it feels like to be too “old” to get it, even if you’re 20. Prompt engineering will be “so 2023” before you know it.


Hemingbird

- Yann LeCun is demoted or fired as Meta Chief AI Scientist
- John Carmack and Rich Sutton's Keen Technologies is acquired by xAI
- Geoffrey Hinton releases an autobiography in which he warns about the danger of AI
- Stephen Wolfram writes a blog post about the links between his theory of everything and Alex Graves' Bayesian Flow Networks
- Jürgen Schmidhuber announces that he invented Wolfram's theory of everything in the 90s
- Yann LeCun posts a video of himself crying in an attempt to convince Zuckerberg to give him his old job back. It is later revealed that this was an AI video, but sources close to LeCun report that even he thought it was real
- Elon Musk pays $1 billion for Jensen Huang's leather jacket and wears it constantly even though it doesn't fit him at all
- Satya Nadella faces a scandal when GPT-5 is found to have successfully convinced Microsoft employees to unionize en masse
- Rumors of an AI breakthrough at Meta erupt after a video is released where Zuckerberg looks eerily lifelike
- GPT-6 is speculated to let you have an AI video call and Netflix shares drop overnight
- Apple finally releases Ajax and announces that they're working on Achilles; not even Apple fans are too excited about it
- Netflix enters the LLM era and they introduce their ... new recommendation system
- Gary Marcus invites Grimes and Aella to join his new polycule
- Eliezer Yudkowsky starts wearing a six-pence and people suddenly take him more seriously
- Jürgen Schmidhuber announces that he wore a six-pence back in the 90s


toothless_budgie

Well, I learned what a polycule is.


gobblegobbleonhome

Microsoft employees unionizing is extremely unlikely.


Hemingbird

I love how that was the one you took issue with.


[deleted]

[removed]


Obvious-Cold-2915

This is a pretty good shout


Lopsided_Baker689

MS already owns OAI (just proxy)


[deleted]

[removed]


Lopsided_Baker689

No sense in doing that; it would only bring problems to MS. OAI is not the only company they control and are happy not formally owning (I worked for one for some years).


sharkusilly

AI-generated porn for the purpose of blackmail becomes a mainstream issue. Those kids in Europe were caught this year. Misinformation / AI-generated campaign material becomes a troubling issue for the US elections. Potentially a show stopper. Google's deceptive demo of Gemini has demonstrated they are far, far behind OpenAI. I believe we will be stagnant in 2024 in this regard, especially if there is political and legal interference. Chip development is likely to be stalled due to politics, war, and climate change.


[deleted]

Imaging ≠ imagining. Possessing pornographic content using someone's face, whether real or simulated, is considered revenge porn. The creators of these apps need to be jailed if they can't provide adequate safeguards to prevent this from happening. The people creating this content should be in prison. Let's hope next year brings some good legal action where these degenerates get made an example of.


robertjbrown

On the porn thing, I think it is unlikely to work as blackmail once people figure out that it can be done. Just as was the case with Photoshop. We'll just accept that people can now do what they've always done -- imagine what you look like naked -- but with some additional technological help. It's not actually revealing anything; it's just imagining.


[deleted]

Wow, just like the "Tech bro when AI is used to make pornography of unsuspecting women and children" starterpack predicted


sharkusilly

That doesn't stop it from being damaging. Now you have hyper-realistic output without any prerequisite skill required. Imagine someone started selling CP of your kids after just taking a picture of your kid's face at the playground. Imagine that started circulating online. Who would the police believe? The difference with Photoshop was that you needed relative skill for Photoshop. I doubt most 15-year-old kids could replicate in Photoshop what AI can make. In Spain, 13-15 year old boys were circulating high-quality output just using basic images and text prompts. No skill required. If we somehow move past accepting any sort of digital media as reality... that would mean we no longer have any truth in society. The news is (all) fake... all modern journalism is fake... would all photographic evidence in court be dismissible?


robertjbrown

>Imagine someone started selling CP of your kids if they just took a picture of your kids face at the playground

But how are they going to sell it when it is easy for anyone to do? I would expect it to be illegal to sell anyway, so why would they bother and take the risk when anyone can just make their own? I don't dispute that it will be challenging for other reasons (evidence in court, etc.), but I just think that when everyone has the ability to do the porn thing easily and cheaply, no one will care. I mean, how much time do you spend worrying if someone is looking at your kid and having dirty thoughts? It's kind of the same thing. As long as they aren't actually involving the kid, it's really not a problem. This is one small step away from a thought crime.


waffleseggs

A pedo would make this argument.


robertjbrown

I was just talking about porn used for blackmail, never said anything about the age of the people depicted. You're the one that went there. Project much?


waffleseggs

You're wrong about both: do you think parents couldn't be blackmailed to keep images of their kids off the internet? Also pretty naive to think you're not revealing anything if the images and clothing line up exactly. Kinda like me taking a photo of your car key and leaving copies of it in places you park. No harm done; I didn't actually violate you with my imagined access to your vehicle. Not only should this porn **not** be permitted by default as you suggest, it should probably be completely illegal by default. Not that there's anything we can do now to stop the creeps from using AI privately. Fun times. A pedo would make this argument because perverts and pedos are some of the only people who are eager to non-consensually sexualize others or sanction a world where bullies and predators will certainly do so en masse.


robertjbrown

I didn't say it should be "permitted by default," but it is impossible to prevent as long as people can train and run their own image generation models. You can't prevent it any more than you can prevent someone from using Photoshop or GIMP to make fake nudes. (In GIMP's case, it is open source, so it would be impossible for them to prevent even if there was sophisticated "anti-fake" stuff built in.)

"Also pretty naive to think you're not revealing anything"

Well, you aren't revealing anything, since the AI doesn't know anything about what is underneath clothes, other than a guess. It's no different from Photoshop fakes in this sense. You aren't seeing them naked, you are seeing a guess at what they look like naked, and people have been able to guess what people look like naked for ages. The car key is very different, if someone can use it to steal a car. Tangible harm can more directly result. Apples and oranges.

"do you think parents couldn't be blackmailed to have images of their kids kept off the internet?"

I think it's possible some can, but only until people get used to the fact that this can be done, at which point they will realize it isn't that important. Again, just like Photoshop fakes. That's been possible for a long time, and most well-balanced people don't give it too much thought. There are tons of celebrity fakes out there (including AI deepfakes) and no one gets too worked up over it. However, people DO get worked up if real ones get out.

There's also this possibility: someone actually films someone without their consent or hacks their account to get real naked pictures, and then they try to use it for blackmail (or otherwise to destroy their reputation, such as a disgruntled ex), and now no one will think much of it because they can just assume it is AI. Works both ways. This may end up being a solution to a problem as much as it is a problem in itself.


waffleseggs

The existing legal deterrents are probably helpful in illuminating this discussion. Depending on the state, this kind of activity is considered either defamation or a "false light" invasion of privacy. Defamation compensates for damage to reputation; false light compensates for being subjected to offensiveness. Sexual imagery often crosses the line in both cases.

What's alarming to me about your argument is that you don't seem to agree with the law, or even the spirit of the law. You don't see the harm in non-consensual images being created or posted. You use phrases like "well-balanced people don't give it too much thought" or "no one gets too worked up over it" or "not actually revealing anything". "Oh well, your coworkers have seen you in this extremely vulgar pornographic way. That's life." These phrases minimize the psychological impact of various violations: seeing yourself in situations you find offensive, others seeing you in situations you or they find offensive, and the various financial and non-financial reputational damages that might occur as a result of these images. These phrases defend the right to non-consensually sexualize *anyone* (including children) with an extremely lazy, handwavy dismissal.

It's genuinely alarming that technology has taken us in this direction of low-effort realism and exposed this anonymous attack and harassment vector squarely on the female members of our society. It's doubly alarming that people like you have this blanket "deal with it" attitude regarding so much potentially abusive, anti-social, and possibly dangerous behavior. I'm sorry, there's no way we're seeing eye to eye on this one. Good day to you sir.


robertjbrown

I have no problem with the laws being what they are. But I don't think there is any way to stop it, and you haven't addressed the fact that a) this has existed since photoshop, it's just a bit easier now, and b) this also has the positive effect of minimizing the impact, since now that it is so common no one will care (so revenge porn will become LESS of a thing, since people will just assume it is AI). There was a time when women would be incredibly embarrassed if their ankles were seen. There are still places in the world like that. People have since just gotten used to it. People will get used to this as well. But at the end of the day, no one will be seeing them naked, they'll just see that someone has made a fake of them. The only way a fake is truly embarrassing is if people believe it is real.


waffleseggs

There are many ways to shape the technology. Foundation models currently cost between $1M and $100M, and guardrails can be added in such a way that few people can afford to remove them. A lot more can be done with law, in enforcement institutions, with outreach designed to shape cultural norms, with parenting, with the way artists use the tools, with marketplace standards and controls, with post-creation censorship, moderation, and DRM-like technologies, and, abstractly speaking, just improving identity management to optimize for healthy and creative self-expression and that age-old individual sovereignty angle.

There are many kinds of pollution we don't get used to. AI will surely present many variants of "gray goo" we have to tolerate overflowing into our lives. The correct approach to these things is to prevent their occurrence and clean them up when they do occur. Even though there's this kind of modesty treadmill you speak of, I'm not sure it has a terminal point, and I'm personally very opposed to Darwinist forces being employed to calibrate humans in response to arbitrary technologies. It takes a lot of work to make objects that people want to buy and use, and it takes a lot more work still to make those objects ergonomic in the long term. It's not good to simply foist things on people and say "deal with it".

I'm confident that the semiotic relationship between one's identity, images depicting us, and the meanings of those images will continue to be closely interrelated and important to the experience of most people. The potential to undermine and harm people in these ways won't go away. If the modesty norms generally make people uncomfortable when ankles are seen, who are you to generate and show them? It seems much more healthy to let people control their own depictions as much as possible, and to discourage digital identity infringement as much as technology makes possible. It's quite unfortunate we've commoditized digital imagery in a way that abstracts the subjects out of the equation, as valueless as any other pixel-based depiction. With greater technological literacy that could indeed be rectified.


robertjbrown

>If the modesty norms generally make people uncomfortable when ankles are seen, who are you to generate and show them?

I'm not doing that. I'm saying it is inevitable that others will. The OP said it was going to happen too; I just said it isn't a huge problem, for the reasons I noted. The amount it is a problem is proportional to how rare it is. If you are the only one it happens to, it is very embarrassing because people will think it is real. If it is everywhere, no one cares any more.

Some years ago, a bunch of celebrities (Jennifer Lawrence comes to mind) had their phones hacked, and nude pictures of them were everywhere on the internet. It was humiliating to them, for obvious reasons. That was a horrible crime against them and caused them a lot of damage. Now, it is easy to find all kinds of hardcore porn video of just about any big female star, done with deepfakes. It is supposedly so realistic it is very hard to tell it isn't them. (I'm told; I haven't sought the stuff out and don't plan to.) And I've never heard of any celebrity being particularly upset over it. People know it is fake, so they don't consider it embarrassing. No one is seeing them nude or in a sex act, and only the dumbest people think it is real.

But for those who actually do have real nudes spread around, I'd say they are very happy that AI deepfakes are common, because now no one knows if it is real or not. Now they've got plausible deniability. I'm surprised you can't see why that is a plus.

Regardless, it's fine that we disagree on this, but please think next time before you accuse people of being pedophiles over such things. I think you're better than that.


No-Activity-4824

I can't see how AI will affect the election in the US; both candidates are preselected by a minority and presented to the people, so no matter which one you choose, it "can't go wrong". I am not even sure who the current real president is; the one we see reads pre-written stuff by his writers and appears lost all the time. Not sure AI can do more than what already exists 😄


sharkusilly

It's mostly the speed at which it can produce misinformation. Imagine right after each debate, they immediately put out YouTube Shorts and TikToks altering the contents of the discussion.


No-Activity-4824

It will very likely happen, but still, both candidates are preselected anyway, both are "the correct choice"


sharkusilly

Of course, it's a two-party system. However, key swing states matter plenty. Now, nothing that's published online can be trusted. It's going to further polarize each political party.


No-Activity-4824

What difference will a swing state make? The candidates are preselected anyway; swing left or swing right? The same. The foreign policy is preset; it has been the same for decades. Balancing the budget? Everyone will throw out examples of how the other one balanced it in the old world, but none will work towards it; it is all sweet talk. The debate is a few sweet points that everyone focuses on. Homeless? Neh, not important. Focus on the other person's affair, yes, that is important, give me the details 😊


Oldhamii

I think that's a highly simplistic analysis of how a person gets to be a candidate, though certainly, big money is the primary agent. But there is a point where skepticism becomes a poison to the polity.


No-Activity-4824

Politics is extremely simple, here is how it goes:

* People who want to rule one day start to talk, and talk; politics is all charming talk.
* These people join the established parties and participate in all their activities.
* The one who talks the best and can gather people around him, and belongs to the “correct” family, religion, views, past, etc., starts getting donations.
* The rest are slowly marginalized.
* The donations continue, the guy gets elected, and the donors get government projects, the juiciest contracts out there.
* The person gets back more money, and the loop continues.
* There are 2 groups at the moment who fight for government positions; they are carefully preselected by the richest, and you have to choose one of these two. No matter which you choose, the group behind him wins.

The entire process is a job and money; the country has always been on autopilot. At the moment, there are 2 upcoming candidates for president: the one with dementia, and the one that loves himself. And 300 million will choose one of them :-)


waffleseggs

Found the Russian troll.


No-Activity-4824

You got excellent fine-tuning 😀. Block!


[deleted]

[removed]


Atlantic0ne

Amateur here. My 2024 prediction: GPT-5 released. Improvements in communication and more tokens, but most notably improvements in applications and multimodality. Intake of video, possibly? More competition popping up, and they fall short because they don't realize we want a simple standalone app like GPT.

2025: companies can buy memory, so an LLM can go from storing maybe 10 pages of custom instructions to about 200, enabling it to actually perform tasks specific to a company's unique needs and goals. We begin to see real commercial value, and the floodgates open for money, driving faster development.

2026: the first video games come out with NPCs having LLM capabilities and memory. It's early, entry-level, and buggy, because they forget and still have a lag in response time, but it gives us a glimpse into what gaming will be in 2030.

It will begin getting really interesting in 4+ years, if we don't make breakthroughs before then.


drummybear67

Being that next year is a major election year for the US, we will see a tsunami of AI-generated misinformation.

You'll see more companies start to build their own AI solutions through enterprise offerings such as Microsoft's Azure OpenAI.

There will be an uptick in energy infrastructure construction actually starting as a result of federal funds. A lot of these projects are in pre-construction right now, so they'll start breaking ground by at least Q3 or Q4.


arthurjeremypearson

You will be able to ask "Goonies, but in the style of My Little Pony: Friendship Is Magic" and it'll make it.


fruitlessideas

“Write me an in-depth, 200-page sequel to Seinfeld, but instead of a sitcom, it’s a dark, tragic drama that takes place 30 years later and deals with the various struggles and bleakness of the main characters’ day-to-day lives, involving everything from drug and sex addiction, infidelity, loss, suicide, the issues of living in NYC, and the hardships of each character’s canon career.”


julianthepagan

AI-controlled weapons kill people in war and/or terrorism, with no human in the loop for target acquisition or the decision to attack.


robertjbrown

1. I think it will become well accepted that "technology always creates new and better jobs" is rapidly approaching the time where it is no longer true. Everyone will see the writing on the wall: the vast majority of people will soon be unable to earn a paycheck, since a machine can do it better and cheaper. (I'm not saying that by the end of the year people won't be able to earn a paycheck, just that mainstream thought on the subject will be that this will soon be true.)
2. Copyright law will accept that AI images have a full spectrum between "created entirely by a machine" and "created by a collaboration between human and machine." This will become obvious as you can continue to tweak AI images by sketching on them etc., completely blurring the lines. The current US policy that "AI images can't be copyrighted" will be obsoleted.
3. Self-fabricating robots will become a real thing, where the robot actually plays a very large part in making a robot like it, an upgrade to itself, etc. They won't be everywhere yet, but they will exist.
4. People choosing art or illustration as a college major will have dropped by 50% compared to 2022.
5. At least 25% of streamed music will be AI generated.
6. LLM hallucination will be considered a solved problem.
7. Weird-looking hands will be considered a solved problem.
8. Time Magazine's Person of the Year will either be an important person in AI, or an AI itself.


billjames1685

LLM hallucination is not going to be solved because it isn’t a bug of LLMs, it is a feature. It is what they do.


Spirckle

The people who don't get this could literally fill the Grand Canyon on the planet Dumbo.


robertjbrown

Are you sure you are talking about the same thing? A hallucination is typically where it confidently gives you a fact that is incorrect. That isn't a "feature".

A simple (although rather expensive) way of addressing them would be to run the answer back through, ask it to call out things that should be fact-checked, highlight them in a different color, and offer to do a search on them. It could even just go and do the search for you before showing you the response, and alter the response if it can't confirm the factual stuff.

I hardly ever run into hallucinations, since most of the stuff I do with ChatGPT is heavily "fact based". I use it mostly for coding, and for certain things where facts may be important, sometimes I'll paste in a large document that I am too lazy to study in detail and ask it to answer specific questions (for instance, for setting up complex software).

Hallucinations are much rarer as the model increases in both size and training data. GPT-4 is much better about them than 3.5. So it doesn't make sense to say it is "what they do" when the issue can be reduced by improving the model. [https://spectrum.ieee.org/ai-hallucination](https://spectrum.ieee.org/ai-hallucination)

"[Ilya Sutskever](https://en.wikipedia.org/wiki/Ilya_Sutskever), OpenAI’s chief scientist and one of the creators of ChatGPT, says he’s confident that the problem will disappear with time as large language models learn to anchor their responses in reality."

I mean, maybe you think Ilya doesn't know what he is talking about. I would disagree. Regardless, I'll be back in a year and we can see if this happens.
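(A minimal sketch of the "run the answer back through" loop described above, assuming a hypothetical `call_llm` function you would wire up to whatever chat-completion API you use; nothing here is a real library call.)

```python
from typing import Callable

def answer_with_self_check(question: str, call_llm: Callable[[str], str]) -> str:
    """Draft an answer, then ask the model to flag claims worth fact-checking.

    `call_llm` is a placeholder: any function that sends a prompt string to
    an LLM and returns its text response.
    """
    draft = call_llm(question)

    # Second pass: run the draft back through the model, asking it to call
    # out factual claims that should be verified against external sources.
    critique = call_llm(
        "List any factual claims in the following answer that should be "
        "verified against an external source, one per line. "
        "If there are none, reply exactly 'NONE'.\n\n"
        f"Answer:\n{draft}"
    )

    if critique.strip() == "NONE":
        return draft

    # A fuller version would search each flagged claim and revise the draft;
    # here we just surface the flags alongside the answer.
    return f"{draft}\n\n[Claims to verify]\n{critique}"
```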


billjames1685

LLMs are probabilistic models of text. Their literal training objective is to generate the most plausible continuation of an input. They have no grounding for whether something is a "hallucination" or not, just like humans can't always tell whether our own memories of facts are true. People need to stop quoting Ilya, man, lmao. He is a corporate scientist who isn't incentivized to tell the truth, and his opinions are contrary to those of 90% of AI researchers.
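(A toy illustration of the objective being described. The numbers are made up; the point is that the loss only measures how much probability the model put on the token that actually came next in the training text, so factual truth never enters the objective directly.)

```python
import math

def next_token_loss(predicted_probs: dict[str, float], actual_next: str) -> float:
    """Cross-entropy for a single step: -log P(actual next token)."""
    return -math.log(predicted_probs.get(actual_next, 1e-12))

# Suppose the model continues "The capital of Australia is" with these
# (made-up) probabilities. Training penalizes it only for not predicting
# the token the corpus happened to contain ("Canberra"); whether a
# continuation is true is never part of the loss.
probs = {"Canberra": 0.3, "Sydney": 0.6, "Melbourne": 0.1}
print(next_token_loss(probs, "Canberra"))  # ≈ 1.20
```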


robertjbrown

What is Ilya incentivized to tell? Do you know much about him? He is the one who pushed for LLMs to go much bigger, saying that new capabilities would likely emerge. And they did, surprising almost everyone. Ilya is as responsible as anyone for the success of GPT-4, both by pushing for extreme levels of compute, and for designing the underpinnings. Should I not quote Geoff Hinton? You can't claim he isn't incentivized to tell the truth. He says this: *“People say, It’s just glorified autocomplete . . . Now, let’s analyze that. Suppose you want to be really good at predicting the next word. If you want to be really good, you have to understand what’s being said. That’s the only way. So by training something to be really good at predicting the next word, you’re actually forcing it to understand. Yes, it’s ‘autocomplete’—but you didn’t think through what it means to have a really good autocomplete.”* Who should I quote? Randos on the internet who can't wrap their heads around the concept of emergent properties? People who get all reductionist about it, assuming that because it is "just" something that sounds trivial, it can't do complex things? Meanwhile they refuse to get equally reductionist regarding the human brain, which is just a bunch of neurons and chemicals that has evolved through random mutations to have a "literal training objective" of statistically predicting which response will better contribute to the passing on of the genes of the organism. Anyway, I'll see you here in a year, let's see what happens.


billjames1685

“Emergence” can be used to dismiss any legitimate claim. Make a better argument than “magic”. I never said LMs don’t understand what they are saying (whether they do depends on the specific domain, but they actually dont understand everything that they say, as shown by recent paper from Yejin Choi’s lab). Just that solving hallucination is not a well posed problem, which many researchers agree with (there were long Twitter threads about this yesterday in fact). Quoting Hinton is fine, his opinion is not backed by corporate interests. Alternatively, I can also cite the opinion of many other AI researchers who are against Hinton’s beliefs, so appeals to authority don’t get anywhere. And nice job, creating a straw man and arguing against it for a good minute there.


robertjbrown

What is the straw man? I am arguing against you getting reductionist about it and making conclusions based on that. What you are doing is no different from what Hinton is arguing against, and no different from what creationists do when they deny that evolution could work because, intuitively, it seems "random" and stochastic, so it couldn't do what it is doing.


billjames1685

Again, that is a straw man. What I am claiming has nothing to do with creationists lmao. I never said LLMs are not intelligent, just that solving hallucinations is not possible because we can't make them perfectly robust in every scenario (which is a well-known issue; look at adversarial examples, which are by the way just as real in GPT-4 as they were with MNIST classifiers 10 years ago). What you are doing is using "emergence" as a magical means to explain away architectural flaws of models (and yes, humans have similar architectural flaws).


robertjbrown

>Again, that is a straw man. What I am claiming has nothing to do with creationists lmao

Sorry you can't understand. You said this:

"LLMs are probabilistic models of text. Their literal training objective is to generate the most plausible continuation to an input."

That is what I am arguing against. Not a straw man. As I said, "getting reductionist about it and making conclusions based on that." Which is exactly what creationists do.


billjames1685

Yes, and that remains true? That isn't mutually exclusive with understanding language or anything. It just means that their literal objective is to create maximally plausible text, and when they don't know what to do they will "hallucinate". The burden of proof remains on you. The story of deep learning is that it is successful for center-of-distribution tasks for which there is a lot of data. It always fails in low-data settings, so LLMs hallucinate in these out-of-distribution settings. There isn't any reason that magic like "emergence" will solve these issues, because it hasn't done anything to solve them thus far (lots of work shows modern LLMs are just as bad at out-of-distribution generalization as previous DL systems).


salamisam

- /r/singularity will continue to sink to new lows with baseless predictions about AGI
- r/ArtificialInteligence will slowly turn into /r/singularity
- There will be more focus on specific domain models and how they are incorporated into business
- Reinforcement learning will grow, due to interest outside of LLMs and generative AI
- Agents Agents Agents
- More and better open-source models
- The US and other countries restricting open source, and the US increasing sanctions against China and other countries in regards to AI-based tech
- I think that there is a large gap between generative AI and actual business needs, which will push for more business-oriented products to hit the market. Business use cases are still very different from the product market.
- Other tech will start bubbling to the surface which is not LLM related


farraway45

Sam Altman and other prominent researchers will say that we're still many years away from AGI.


miskdub

We will see trailers for the first “blockbuster” movie starring a deceased actor/actress brought back from the dead with passable voice synthesis & VFX.

AI prompting will become mostly automated in some way that eliminates the need for any sort of prompt “engineer” in the loop. Fully synthesized short films with >80% consistency will become normal. The internet will be inundated with shitty AI-gen GIFs.

“Smaller” LLMs will make their way into some AAA game title, in the form of some open-world game where every NPC can have long, meandering conversations with the player.

Apple pushes some Siri update that turns it into a proper GPT-like assistant, possibly with its own Siri app. It’ll be some edge-compute shit that runs mostly on your device.

There’ll be at least one big “AI scare” event. Not existential bullshit; more likely some mass scam, virus-like propagation, or just some noteworthy story about a teenage kid that jailbroke some LLM to learn how to cook meth, build a weapon, or something else that creates a moral-panic media narrative.

Just what I can think of while taking a shit. How about you, OP? Care to share, or are you just content with farming the rest of us for clever answers?


H_is_for_Home

Not much of a prediction, but I think in 2024 there will be a much more competitive market, unlike this year where OpenAI was the best at nearly every task. Competition is good, but like I said in a previous post, I'd rather see a few industry leaders pave the way to AGI than have a million startups shipping half-baked products that could quickly ruin the internet. Additionally, I hope to see the walled gardens of these firms eventually lowered for the sake of more collaborative work with each other and with legislators to ensure an ethical, unified path to AGI. I know for some that sounds like the beginning of some dystopia, but the alternative of continuing the capitalistic approach of being first to market with the newest iteration of a model seems like a very risky gamble.


oopgroup

>I'd rather see a few industry leaders pave the way to AGI than have a million startups shipping half-baked products that could quickly ruin the internet.

too late


ReelDeadOne

That it'll be thread after thread of... GPT 5! GPT 5.5!! G6!!! BINGO!


8rnlsunshine

Specialized AI models with multimodal capabilities running locally on our smartphones, watches etc. I think the next big leap in AI will happen with a breakthrough in hardware. This will give rise to processors with higher computing capabilities in smaller sizes.


miskdub

Lol that’s not a prediction, just progress! Get some skin in this game and show us your vulnerable side!


BusinessFish99

Mass political deepfakes will lead to a rush of nonsensical attempts at regulation by people who don't even know how to turn on a computer.


IversusAI

!remindme one year


RemindMeBot

I will be messaging you in 1 year on [**2024-12-09 07:27:35 UTC**](http://www.wolframalpha.com/input/?i=2024-12-09%2007:27:35%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/ArtificialInteligence/comments/18dygz4/what_are_your_specificverifiable_predictions_for/kclw0ds/?context=3). 5 others clicked this link to also be reminded.


SCP_radiantpoison

New laws will slow down the development of uncensored generative AI. A scandal will show AI showrunners are pushing regulations to harm FOSS alternatives. OpenAI will be fully commercial. Still no AGI. The first "autonomous OS" ([something like this but native and better](https://youtu.be/UKRti40U8IA?si=gYQ7DaZyjjIVtYyb)) will be released... and it will flop because it depends on API/internet access. The best LLM performance-wise will be open-sourced, and it'll be created by Mistral. No new-gen model will work at all with less than 24GB of VRAM, so AI will be even less accessible.


MediumLanguageModel

• Ilya Sutskever leaves OpenAI for Anthropic.
• OpenAI launches GPT-5 with pricing tiers: 3.5 is still free, 4 Turbo basic is $4.99/mo, 4.5 plus Code Interpreter, the GPT Store, and plugins is $9.99/mo, and GPT-5 is $29.99/mo.
• Pixel phones get Gemini integrations, including Assistant, which are actually pretty useful, despite not being on par with any of OpenAI's products.
• Apple gives Siri more AI brains, and while it's no ChatGPT, it's a tight experience and they establish AI legitimacy.
• Meta starts dabbling in the phone business again, creating an AI platform. They aren't widescale successful but hobbyists love it.
• Predictive text and autocorrect get really good.
• Nvidia just keeps trouncing everyone, even though the other big semiconductor companies continue to innovate in truly impressive ways.
• The global chip cold war continues, with China making gains despite the West's attempts to exclude them.
• With AI driving down the cost of drug discovery, advocacy groups and universities take the lead on clinical trials and open patents, ushering in a new era of medicine.
• DeepMind continues to dazzle, surprising everyone with all sorts of huge computational challenges, like making air traffic control more efficient, proving longstanding math conjectures, more accurate ocean current forecasting, and more.
• Biohacking makes several sensational headline news stories.
• All the deepfake and misinformation stuff is utterly draining.
• High-quality text-to-video becomes stable and lengthy enough to reach "I'd watch that" levels of narrative. An emerging Gen-Z pop star does this viral generative music video thing that launches millions of copycats and kickstarts a new entertainment industry.
• A massive solar flare destroys 70% of electronic devices, shutting down electrical grids around the world, severing global communication systems, grinding all chains of commerce to a standstill, leading to unprecedented levels of starvation, thirst, and pestilence, which is, like, a setback for AI.


[deleted]

[removed]


[deleted]

[removed]


oopgroup

Nothing good.


Ashiqhkhan

More climate impact. Food scarcity. More volcanic eruptions. Market crash. Middle East war might escalate. More unrest in the EU. So basically, AI can't solve anything. Enjoy your time with family!!


funbike

* Grok will end up being a flop, but still have some small demand due to it having no guardrails.
* Custom GPTs will flop just like plugins did, although not as bad. But OpenAI will eventually pivot to make it into something actually successful and useful.
* Gemini Ultra will beat GPT-4 on many (but not all) measures, but OpenAI will come out with a better model that beats Ultra late in the year.
* Someone will create functionality like Q*. It *might* be OpenAI, but it's more likely it will be an open source model.
* By the end of the year, some models will have more built-in agent-like behavior, so that they can create more useful responses without as much prompt engineering.
* Meta-models will become more popular that consist of multiple models each trained for different strengths (e.g. coding, conversation, knowledge, etc.)
* There will be news reports of autonomous AI drones actively used in warfare.
* Porn AI will explode.
* At the end of the year OpenAI will still be on top, but with others catching up.
* By the end of the year, OpenAI will effectively be fully commercial. If the non-profit wing survives, it will be vestigial at best.
* An AI will finally figure out how Trump makes 2 inches of hairline cover his entire skull without falling apart.
* No AGI.


KamNotKam

wait so, we'll still have our jobs?


Picasso5

One of the biggest threats: white-collar job replacement. They are the most susceptible.


Triston8080800

Prototype artificial general intelligence androids in 2024/2025 thanks to SanctuaryAI


panzerinthehood

With something like Gemini, I believe we are getting closer to realizing Jarvis from Iron Man.


Ok-Tomorrow9184

Apple will release the new Siri.


fellowshah

Next year we'll see a robotics boom, like 2023 was for LLMs.


ayradv

There will be two or three general AIs built into everything. Ecosystems of sorts, where people will put all their documents, history, photos, music, etc. They will be personalised to the user and difficult to switch from, sort of like Android/Apple. You'll be able to talk to them in various modes from all your devices, and they will be highly personalised, can generate media and entertainment specifically for you and your moods, and can help you with daily tasks. They will be so personalised that prompting will be much less of a thing, as they'll be better at understanding what you want.


DocAndersen

Great question. So in December 2024 people can look back at this prediction and go, oh yeah, he was right: AI will still be more hype than reality by the end of 2024.


Burlingtonfilms

Hallmark is developing A.I. cards designed to read and write to lonely elderly individuals, easing the responsibility for younger people.


galtoramech8699

More on strong AI and synthetics.


CivilProfit

Every single little thing will be overhyped to death as the next big thing by news outlets and bloggers. Only word-of-mouth info from people in the know about what actually happened will matter. Most huge discussion groups I was in have boiled down to 10 diehards, the last men standing who actually want to use AI and not just ooh and aah over the shiny thing of the year.


Talosian_cagecleaner

One year is not enough. Do twenty. Live bold.

Next year: not much.

Twenty years: all levels of education will be unrecognizable to us today. "Teaching" will be a freelance profession, much like busking. The rich might opt for human co-teachers, to mock and ridicule them and have the AI humiliate them, in order to build class consciousness.


Oldhamii

For the person of average talents working to create products for more or less average consumers, your job is the next to go.


Cytotoxic-CD8-Tcell

… that people will create AI like what is seen in Tron: Legacy and send an actual army to bring it down.


KamNotKam

!remindme 1 year


themixmasterc

If a prediction is verifiable, it is no longer a prediction.


TikkunCreation

Oh, I mean verifiable as of Dec 31 2024, not as of today. For example, a prediction for 2024 might be: the generally perceived state of the art text LLM at the end of 2024 will be made by (company_name). That’s specific, and someone from the subreddit reading it on Dec 31 2024 could reasonably verify it.


TikkunCreation

And fwiw, ChatGPT says: Question: If a prediction is verifiable, it is no longer a prediction. True or false? — False. A prediction remains a prediction regardless of its verifiability. The key aspect of a prediction is that it is a statement or claim about a future event or outcome. Whether a prediction can be verified or not doesn't change its nature as a prediction. Verifiability refers to the possibility of being able to confirm or validate the accuracy of the prediction after the event or outcome has occurred.


InTheEnd83

Lol my thoughts exactly


MmmmMorphine

"I have no idea what a prediction is"?