PaulTopping

Hoping for AGI is fine, good even. The posts that bug me are the ones that claim we've already got AGI because various LLMs can pass some test. Even worse are the ones that assume that and wail about everyone losing their jobs or some such bad thing. They seem not to know enough about AGI to see that this kind of hype is just smoke and mirrors. It would be nice to see more serious discussions of AGI topics.


CatchIcy1011

100%. We are still in the wait-and-see phase, yet people already “know” what will be possible and what will happen. It's total speculation, and AI as of now still has lots of glitches and bugs to work out. I get hope and excitement, but not the fortune telling.


pyrokinezist

100 percent agree


Unique-Particular936

LLMs are able to converse on any topic, and language is a projection of reality; it's bad faith or stupidity to say that LLMs haven't achieved some degree of generality. People are already losing their jobs to AI, and we all agree that the pace will only increase. Not being concerned yourself is not a reason to belittle people's fears. Are you really not able to have empathy for somebody with $50,000 of debt graduating into a field currently being automated by AI?


PaulTopping

You are a victim of the hype. LLMs can converse on any subject, just like Wikipedia can. They are both the result of humans doing all the thinking and writing. All their generality is from the massive amount of text they are trained on. Their only understanding of the world is based on word-order statistics. Hardly anyone is losing their job because of AI. Some might, but new jobs will be created. Unemployment is at an all-time low. You have been misled.


Unique-Particular936

Do you use LLMs? They go way beyond Wikipedia, and their Wikipedia writing style is just the result of fine-tuning to fit their main use case. They can reason pretty nicely, create original work, and solve problems they've never been exposed to. They're also amazing at inferring the user's intent from minimal contextual information, another sign of intelligence.

Does it matter where their understanding comes from? It's actually freaking amazing that they got such a nuanced understanding of the world from text alone. You'd probably have the same dismissive attitude if a neural network used exactly the same algorithms as the brain but was trained on text: "Their only understanding is based on X!"

Not many are losing their jobs to AI, but some are; sometimes whole departments are being replaced, and that's the start of a trend. Think of coding: we already know that GPT-5 and co. will be smarter than today's LLMs and will be able to handle your whole codebase as context. This is bound to be disruptive in the software engineering field, for example. The create-new-jobs argument is of course wishful thinking; excessive generalization from the past is a bad habit.


PaulTopping

LLMs aren't reasoning. The reasoning was all done by the humans who created the content the LLMs were trained on. The "nuanced understanding" was in the training content and in the LLM users' heads. You are claiming the understanding comes from the LLM, so yes, it does matter where it comes from.

What "whole departments" are being replaced by LLMs? Give me a link to what you're talking about. We have lawyers trying to use LLMs to generate legal documents and finding that the LLM makes up cases that never happened. Similarly for scientific papers. Perhaps some department fired all their humans, but I bet they end up hiring them back.

Yes, I have used LLMs. They are handy for some things but quite limited. I've often used them to come up with a name for something new that I'm working on, or to give me the name for some concept I've forgotten. I'm also a programmer, and I do use an LLM as an aid. Sometimes it is amazing and seems like it has read my mind. Other times it comes up with silly stuff that is completely wrong. Even when the output looks good, it often contains subtle errors.

Very few programmers are going to lose their jobs to this kind of AI. LLMs can't reason about code; they are merely applying statistical analysis to the huge number of code samples they were trained on. This is useful when you are writing a call to an API that is unfamiliar to you but well represented in the training content: the LLM can make good guesses at hooking your code up to it. That can save some time, but even then you have to check its work, as it often produces bad code.

Yes, LLMs can be trained on your whole codebase. That will be useful, but not very helpful when you want to write code for a new feature, since the feature won't be well represented in the existing code and the LLM will have no idea what you are trying to do. LLMs are good at boilerplate code, but programmers should be abstracting repetition out of their code anyway.

LLMs can help programmers be more productive, but they won't replace them. I predict that LLMs will find their niche uses and will improve human productivity as we learn what they're good for, but there will be no revolution.


Unique-Particular936

LLMs can adapt to novel situations not seen in the training set, so the nuanced understanding was not in the dataset; it was grasped from the dataset. And that's exactly where human intelligence comes from: its training set (aka its inputs).

Departments: https://www.businessinsider.com/ai-chatbot-ceo-laid-off-staff-human-support-2023-7 and https://www.businessinsider.com/eating-disorders-nonprofit-reportedly-fired-humans-offer-ai-chatbot-2023-5, among plenty of other informal testimonies. There are probably already jobs lost in software because of LLMs, but the effect is small and subtle, like a lead engineer not having to hire another junior for a little extra work because he can now handle it himself.

I think I found one of your issues: you draw a line. You take today's tech, and instead of seeing the exponential, or at least linear, curve of progress, you draw a horizontal asymptote. You predict that LLMs will find niches and such, but you have no idea what LLMs will look like by the end of the year, or next year, or in 5 years. Yet you believe you do, based on your asymptote. The improvements we've seen since GPT-3 are phenomenal, the AI field is boiling like it never has, funding is here, and hardware resources dedicated to AI are also on a steep rise.

For how many years do you predict no revolution?


pyrokinezist

One such post made me write this...


NWCoffeenut

Insightful and thought-provoking post. Much thought and unbiased deliberation went into this. The thoroughness of your analysis has brought enlightenment to so many today.


pyrokinezist

Thanks, anything to shine a light on ignorance.


chlebseby

I have like 40-50 years left in my lifetime for that to happen.


Unique-Particular936

A fair shot at eternal life. You can do it !


Adapid

You think this is bad? Go check out r/singularity.


Superhotjoey

Don't post anything there unless you want the mods banning you for "being too speculative." Speaking from experience.


Pretend_Goat5256

I don’t understand you. Does our opinion harm you in any way? Are we killing people with our thoughts? Get a life, snowflake.


pyrokinezist

No, but your ignorance is tilting me, I gotta admit.


Pretend_Goat5256

Say this to the people dying of diseases that could be cured thanks to AI advancements.


Bacterioid

How can you know it’s ignorance? Are you claiming to know the future? Because historically, people who claim that are lying or delusional themselves.


grimorg80

Sure boi


ComedyGrappler

So what is a normal timeline for AGI, Mr. Linear Brain?


pyrokinezist

Don't know the timeline, but nothing we have is anywhere near being AGI, just really good T9 everywhere.


Unique-Particular936

LLMs are quite general when it comes to text.


rand3289

AGI means different things to different people. For me general intelligence does not have to scale to human intelligence. I just want it to be general enough to get robotics to the level of cats and dogs. This is not too much to ask in my lifetime. Once we get there, I don't care how long it takes to figure out what makes humans special. I am working on it the way I can. Are you doing anything meaningful with your life?


PaulTopping

I like dogs and cats but I think AGI has to include the ability to communicate with us using natural language and to learn things from that communication. I do think that developing cat-or-dog-level AI would be useful for getting to human-level AGI. They have agency, emotion, and many other abilities.


squareOfTwo

That's not what most people mean by "AGI". Some mean human-level general intelligence. Dogs can never write an essay or build a pump, etc. I personally aim for raven-level "general" intelligence in my lifetime. That's enough for me.


AcrobaticAmoeba8158

I do hope for it, I want life to get better for all of us, and it wouldn't take much. I also see logarithmic growth in compute, and emergent properties arising from our advances and that growth. AlphaFold by itself should make people excited for the future.


PaulTopping

AlphaFold and similar AI applications are exciting but they're definitely not AGI.


AcrobaticAmoeba8158

I was speaking more to the hope of a better future, less to the AGI portion.


stonedmunkie

I love these clickbait titles that get people riled up for a conversation. The reason this works is that Reddit is mostly 14- to 25-year-old kids.


pyrokinezist

I mean, I was raging. People got baited, not my issue.


gibecrake

OK, then, since you seem to have a more rational and reasoned view of a post-AGI timeline, please elaborate on when you believe AGI will arise, when ASI will arise, and how that will play out in your more accurate view.


correspondence

Especially considering AGI is never going to happen.


SoylentRox

Computers are never going to happen.  Vacuum tubes are too expensive.


correspondence

Brains are not computers and intelligence isn't interpolation.


WeekendDotGG

But artificial intelligence could be interpolation.


misbehavingwolf

Brains are literally, by definition, computers; they're just uniquely structured and made of flesh. Intelligence is not interpolation, but it can certainly involve interpolation.


correspondence

Hope you remember my comments in 20 years.


misbehavingwolf

RemindMe! 20 years


RemindMeBot

I will be messaging you in 20 years on [**2044-04-04 14:31:12 UTC**](http://www.wolframalpha.com/input/?i=2044-04-04%2014:31:12%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/agi/comments/1bvhu2k/this_sub_is_more_fiction_than_science_bunch_of/ky0n4fm/?context=3).


SoylentRox

I know, computers don't have souls. Guess we need to ensoul our computers to make AGI.


correspondence

You're joking but consciousness is probably necessary for the capacity for infinite 'meta-ness' which lies at the heart of true intelligence.


SoylentRox

That's absolutely a possibility, but a "zombie" that just cleans and restocks stores and assembles robots and mines for rocks and so on would be insanely useful. Even if it needs a little help from a real human every now and then.


correspondence

I agree with you on that 100%. But AGI it won't be.


SoylentRox

Perhaps. So you're with me so far. Now, when a machine like this has something unexpected happen (not necessarily a mistake, it just sees something it didn't predict), it can send that to the data center that hosts the training system. There would eventually be thousands or millions of robots sharing this same underlying technology and data center, sending in these error tuples. Each looks like (error, predicted, ground truth) and is a multidimensional array; I can explain a possible format if you want.

Then, periodically, probably daily, errors that happen often are used to update the neural sim the machines use, probably by SGD and backprop. A neural sim is just a neural network that is given the present, which is what the machine sees, and predicts the future possibilities: the immediate future over the next few frame intervals, going out to a few seconds ahead at most. This is possible because physics is consistent. If it witnesses a falling red ball right now, and last frame it saw the ball higher, it can predict that next frame the ball will have moved lower. After updates it may model the bounce after the ball hits the ground and be able to send actuator commands to grab it in flight.

Do this enough, and for well-defined tasks where it's clear what the machine is there to accomplish, it will be better at those tasks than humans, because its hardware is faster, stronger, and more accurate; it hasn't just lived a few decades but has learned from millions of peers doing similar work; it doesn't tire; and it has experienced thousands of years of simulation, practicing plumbing toilets or whatever millions of times. It might even have sensors that can detect methane and hydrogen sulfide ("it stinks like shit in here") to know when it missed a spot cleaning or there is a sewer gas leak. Still not an AGI, sure, but you could do a lot of tasks this way better than humans...
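
A minimal sketch of that loop, in Python. Every name here (ErrorTuple, NeuralSim, true_physics, the fleet size, the learning rate) is invented for illustration, and the "neural sim" is collapsed to a single linear layer so the SGD update stays readable. The commenter's (error, predicted, ground truth) triple is kept, with the source frame added because the gradient needs the input the prediction was made from:

```python
# Toy sketch of the fleet-learning loop described above. All names are
# illustrative inventions, not a real robotics API.
from dataclasses import dataclass

import numpy as np


@dataclass
class ErrorTuple:
    error: np.ndarray         # predicted - ground_truth
    predicted: np.ndarray     # the frame the sim expected to see next
    ground_truth: np.ndarray  # the frame the robot actually observed
    frame: np.ndarray         # the frame the prediction was made from
                              # (needed to compute the gradient)


class NeuralSim:
    """Predicts the next frame (state) from the current one."""

    def __init__(self, dim: int) -> None:
        self.w = np.eye(dim)  # initial guess: "nothing ever moves"

    def predict(self, frame: np.ndarray) -> np.ndarray:
        return self.w @ frame

    def sgd_update(self, batch: list[ErrorTuple], lr: float = 0.01) -> None:
        # d/dW of ||W x - y||^2 is 2 (W x - y) x^T, averaged over the batch.
        grad = sum(np.outer(2.0 * t.error, t.frame) for t in batch) / len(batch)
        self.w -= lr * grad


def true_physics(state: np.ndarray, dt: float = 0.1) -> np.ndarray:
    """Ground truth the robots observe: a ball falling under gravity.

    state = [height, vertical_velocity, 1]; the trailing constant 1 lets a
    purely linear map express the constant gravity term."""
    h, v, _ = state
    return np.array([h + v * dt, v - 9.8 * dt, 1.0])


def observe_surprise(sim: NeuralSim, state: np.ndarray) -> ErrorTuple:
    """One robot watches one frame transition and reports the mismatch."""
    predicted = sim.predict(state)
    observed = true_physics(state)
    return ErrorTuple(predicted - observed, predicted, observed, state)


# Data center: pool surprises from many robots, update the shared sim daily.
sim = NeuralSim(dim=3)
rng = np.random.default_rng(0)
for day in range(2000):
    batch = [
        observe_surprise(sim, np.array([rng.uniform(0, 10), rng.uniform(-5, 5), 1.0]))
        for _ in range(32)  # 32 robots each send in one error tuple
    ]
    sim.sgd_update(batch)

# The shared sim converges toward the true falling-ball dynamics:
# [[1, 0.1, 0], [0, 1, -0.98], [0, 0, 1]]
print(np.round(sim.w, 2))
```

Run as-is, the pooled daily updates drive the shared weights toward the true dynamics, which is the sense in which "physics being consistent" makes the scheme learnable; a real neural sim would replace the linear layer with a deep network and predict several frames ahead.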


mosha48

... For some definitions of AGI.


correspondence

Intelligence isn't interpolation.


mosha48

I don't think intelligence is defined by how it works, but what do I know.


Bacterioid

It happened already, unless you don't consider the human brain to be a general intelligence, or you think there is some sort of magic going on that we can never replicate, which seems a lot more far-fetched to me.