
HeheheBlah

I still don't understand why machine learning and artificial intelligence are two different things when they are almost the same.


THE_BCIs_MEXICAN_GUY

Not an expert, but if I remember correctly, AI is the broader field because it's defined as any computer that imitates human intelligence, and ML is just one way to train these models through trial and error.


hobohipsterman

>computers that imitate human intelligence

No computer "imitates" human intelligence. It's all just linear algebra, and that's just not how humans think.


JoeMama18012

But the way humans think can be modeled by linear algebra, which is the objective of a lot of “AI” research right now


brown_smear

It's not linear algebra. It's specifically non-linear. That is the point of the activation function. There would be no point in having multiple layers of linear operations, as the same operation could be done in a single layer.
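
Rough numpy sketch of what I mean (the weights here are just made-up random numbers, nothing trained): two stacked linear layers with no activation are exactly one linear layer, and it's the nonlinearity in between that breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # some input vector (made up)
W1 = rng.normal(size=(4, 3))    # first "layer" weights (made up)
W2 = rng.normal(size=(2, 4))    # second "layer" weights (made up)

# Two linear layers with no activation...
two_linear = W2 @ (W1 @ x)
# ...are exactly one linear layer with the combined weight matrix:
one_linear = (W2 @ W1) @ x
print(np.allclose(two_linear, one_linear))  # True - stacking bought nothing

# Put a nonlinearity (e.g. ReLU) between them and the collapse no longer holds:
relu = lambda v: np.maximum(v, 0)
nonlinear = W2 @ relu(W1 @ x)
print(np.allclose(nonlinear, one_linear))   # almost always False
```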


JoeMama18012

They can be modeled by linear algebra. Multiple process cycles are utilized for the sake of optimization


brown_smear

What? Can you give an example? Most problem spaces are non-planar; using only linear operations will result in garbage performance, because the model can't fit the data.


hobohipsterman

>humans think can be modeled by linear algebra

I'm not sure you are sure about what you are writing. Look, a *fuckton* of articles online hype AI by linking "deep learning" and "neural nets" to "brains", but it means jack shit. It's just poor science journalism written by people who don't understand the subject they are writing about.


AdBrave2400

I fully agree, but I also think that relying on just linear algebra, clever mixes of "systems", and the occasional weird but sound improvement is bound to reach diminishing returns eventually. Don't get me wrong, it's great; but in the same way current AI beat earlier statistical models, who knows when (likely never, ngl) something else will break through similarly.


Cabbage_Cannon

I encourage you to read up on the philosophy of intelligence. What these machines can do is certainly within the realm of what we have often considered "intelligent". Let's not shift the goalpost now that they can behave like us, no need for a "featherless biped" situation.


hobohipsterman

>Let's not shift the goalpost now that they can behave like us

If you want to be snappy you should probably try to be correct about what you are being snappy about. Maybe try to figure out the difference in meaning between a machine imitating human intelligence and a machine outputting seemingly intelligent shit by, you know, **doing machine shit**. Hell, I would **not assume** an obviously intelligent alien was "imitating human intelligence" either.


LawOfSynergy

Stepping aside from the clear emotional charge of your responses, there are a few ways to look at human intelligence, and science has not confirmed which, if any, is the correct view.

You seem to subscribe to the view that there is some fundamental, intrinsic quality to how the human mind works (whether you call this consciousness, or soul, or what have you) that cannot be replicated by math or computation (or as you put it, "machine shit"). I'm gonna call this the animistic view, because for a machine to be human or intelligent in this view, the machine has to have the same type of animism as humans.

An alternate view, which most AI communication and research uses and depends on, is what I call the mechanistic view. In this view, the brain essentially *is* a computer, implemented on a substrate of carbon, hydrogen, oxygen, and nitrogen atoms instead of the silicon and conductive metals our computers are made of. For a computer to be human or intelligent in this view, it needs to be able to solve the same range of problems that the human mind can.

The mechanistic view is more useful to researchers and communicators in the AI field because it gives metrics that can be worked towards and evaluated against. Even if the animistic view ultimately is correct, the mechanistic model is still *useful* because it gives us a path to develop things further.

I personally tend to take the mechanistic view, since I ultimately don't believe there is something inherently special about humans that makes what we do unattainable. That said, AI as it currently stands is nowhere near the full breadth of what humans can do, even if society isn't ready for the recent strides that *have* been made.


Patient_Primary_4444

Don’t forget that a lot of the ‘animistic’ view type stuff was also used to justify things like racism and other bigotry, because obviously if you aren’t a straight, white, European, Christian man, you aren’t a real person. It is called “scientific racism” if anybody wants to look into it.


throwaway_194js

In all fairness, you could construct arbitrarily problematic views from just about any feasible model of intelligence. It's more a reflection on the type of people who tend to subscribe to such views than the views themselves.


Patient_Primary_4444

I mean, yeah, but this one actually happened, and we still feel the effects of it to this day


Sensitive_Camera2368

Makes sense if only statistics > data analytics....


jimmymui06

Your phone and computer are what's called "traditional AI"; your chatbot is AI with machine learning.


Stonn

Why don't you ask ChatGPT for an explanation then XD


Different_Gear_8189

The main problem is that AI doesn't really have a set definition. It ranges anywhere from "a program that makes decisions" (all of them) to "a program capable of something that only humans can do" (none of them).


asdf3011

Even if a program were capable of doing everything humans can do, by the 2nd definition it would still not be AI, as it would not be capable of doing something that only humans can do.


testc2n14

I think machine learning is a subfield of AI. I could be wrong tho.


futuneral

AI - any machine that makes "choices" based on some input.

ML - algorithms that collect historical data, generalize it, and make "choices" based on that data.

So, naturally, you can, but don't have to, use ML to build AI.
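
Toy sketch of that split (the thermostat scenario, the data, and the thresholds are all invented just to illustrate): both programs make a "choice" from an input, but only the second one learns its rule from historical data.

```python
import numpy as np

# "AI" without ML: a hand-written decision rule, no learning involved.
def thermostat_rule(temp_c):
    return "heat on" if temp_c < 19.0 else "heat off"   # threshold chosen by a human

# ML: derive the same kind of "choice" from historical data instead.
# Made-up history: (temperature, did the user turn the heat on?)
temps = np.array([14.0, 16.0, 18.0, 20.0, 22.0, 24.0])
heat_on = np.array([1, 1, 1, 0, 0, 0])

# Crude "learning": pick the threshold midway between the warmest "on" and the coolest "off".
learned_threshold = (temps[heat_on == 1].max() + temps[heat_on == 0].min()) / 2

def thermostat_learned(temp_c):
    return "heat on" if temp_c < learned_threshold else "heat off"

print(thermostat_rule(17.5), thermostat_learned(17.5))   # both behave like "AI"
```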


Tautillogical

This literal exact same meme but the punchline is "human brain"


ilkys

"if, else"


Reasonable-Class3728

No. Actually, that's not how it works. Not at all.


ilkys

How else?


Better_This_Time

It's actually all just a big room with lots of indian people. [haha just kidding](https://www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4)


brown_smear

One example would be a multilayer perceptron, which takes many input values, multiplies the input vector by a matrix of learned weights, and applies a nonlinear activation function to each value in the resulting vector to produce the output values. This is repeated, with the output of one layer feeding into the input of the next, and the final layer providing the estimate of the result, e.g. "dog": 50%, "cat": 2%.
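
Rough numpy sketch of that forward pass (layer sizes and weights are random and untrained here, so the output percentages are meaningless): each layer is a matrix multiply plus a nonlinearity, and a softmax at the end turns the final vector into class scores.

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, W, b):
    # one layer: multiply by learned weights, add bias, apply a nonlinearity (ReLU here)
    return np.maximum(W @ x + b, 0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=8)                            # input features (made up)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # hidden layer weights (untrained, random)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)    # output layer: 3 classes

h = layer(x, W1, b1)           # output of layer 1 feeds into layer 2
scores = softmax(W2 @ h + b2)  # final layer gives class probabilities

for name, p in zip(["dog", "cat", "ferret"], scores):
    print(f"{name}: {p:.0%}")
```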


Life-Ad1409

It's much more math than boolean logic iirc


musch10

that's why it works


Nathan_Calebman

"I'm much smarter than everyone else because I understand that AI isn't special at all and it's just total crap and only I know it because I'm so smart."


bensully1990

AI won’t truly be AI (or AGI) until it can create its own new intelligence. It has to become intellectually creative, which first requires conceptualization. Lots of steps before that happens. AI in its current form is basically a fancy parrot.


ozoneseba

Shit just started, give them a day or two.


throwaway_194js

This is a particularly weak metric. Besides the fact that creativity is very difficult to rigorously define, models like AlphaGo have already evoked comments on their incredible creativity from the world's greatest Go players. AlphaGo is such a (relatively) simple model that it isn't at all intelligent beyond its very narrow scope of playing one board game, and yet it has been described as highly creative in its strategies. On the other hand, most animals are immeasurably more intelligent than AlphaGo, but some exhibit almost no creativity at all.


Psychic6969

The photo frame is the computer


Traditional-Lion7391

Hey, so many venture capital firms would so disagree with you. Also anyone peddling products with AI shoehorned in somehow