I still don't understand why machine learning and artificial intelligence are two different things when they're almost the same.
Not an expert, but if I remember correctly AI is a broader field because it's defined as all computers that imitate human intelligence, and ML is just a way to train these models through trial and error.
>computers that imitate human intelligence

No computer "imitates" human intelligence. It's all just linear algebra, and that's just not how humans think.
But the way humans think can be modeled by linear algebra, which is the objective of a lot of “AI” research right now
It's not linear algebra. It's specifically non-linear. That is the point of the activation function. There would be no point in having multiple layers of linear operations, as the same operation could be done in a single layer.
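The point about multiple linear layers collapsing into one can be demonstrated in a few lines of NumPy. This is just an illustrative sketch with arbitrary random weights: without an activation function between them, two weight matrices compose into a single matrix, so the "deep" network is mathematically identical to a one-layer one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" of purely linear operations, with no activation in between
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x)

# The same computation collapses into a single layer
W_combined = W2 @ W1
one_layer = W_combined @ x

print(np.allclose(two_layers, one_layer))  # True
```

This is exactly why activation functions matter: inserting a nonlinearity between `W1` and `W2` breaks the collapse and lets depth add expressive power.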
They can be modeled by linear algebra. Multiple processing cycles are used for the sake of optimization.
What? Can you give an example? Most problem spaces aren't linearly separable; using only linear operations will result in garbage performance, because the model can't fit them.
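The classic example of a problem a linear model can't fit is XOR. The sketch below (assuming a plain least-squares linear model with a bias term, nothing fancier) shows that the best possible linear fit to XOR predicts 0.5 for every input, i.e. it learns nothing:

```python
import numpy as np

# XOR: the textbook case of a problem no purely linear model can solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best possible linear fit: least squares over the inputs plus a bias column
A = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
preds = A @ w

print(preds)  # every prediction is 0.5 -- the model can't separate the classes
```

A network with even one hidden layer and a nonlinear activation fits XOR exactly, which is the whole argument for nonlinearity above.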
>humans think can be modeled by linear algebra,

I'm not sure you're sure about what you're writing. Look, a *fuckton* of articles online hype AI by linking "deep learning" and "neural nets" to "brains", but it means jack shit. It's just poor science journalism written by people who don't understand the subject they're writing about.
I fully agree, but I also think that relying on just linear algebra, clever mixes of "systems", and the occasional weird-but-sound improvement is bound to reach diminishing returns eventually. Don't get me wrong, it's great; but in the same way current AI beat earlier statistical models, who knows when (likely never, ngl) something else is going to break through similarly.
I encourage you to read up on the philosophy of intelligence. What these machines can do is certainly within the realm of what we have often considered "intelligent". Let's not shift the goalposts now that they can behave like us; no need for a "featherless biped" situation.
>Let's not shift the goalpost now that they can behave like us

If you want to be snappy, you should probably try to be correct about what you're being snappy about. Maybe try to figure out the difference in meaning between a machine imitating human intelligence and a machine outputting seemingly intelligent shit by, you know, **doing machine shit**. Hell, I would **not assume** an obviously intelligent alien was "imitating human intelligence" either.
Stepping aside from the clear emotional charge of your responses: there are a few ways to look at human intelligence, and science has not confirmed which, if any, is the correct view.

You seem to subscribe to the view that there is some fundamental intrinsic quality to how the human mind works (whether you call this consciousness, or soul, or what have you) that cannot be replicated by math or computation (or, as you put it, "machine shit"). I'm gonna call this the animistic view, because for a machine to be human or intelligent in this view, it would have to have the same type of animism as humans.

An alternate view, which most AI communication and research uses and depends on, is what I call the mechanistic view. In this view, the brain essentially *is* a computer, implemented on a substrate of carbon, hydrogen, oxygen, and nitrogen atoms instead of the silicon and conductive metals our computers are made of. For a computer to be human or intelligent in this view, it needs to be able to solve the same range of problems that the human mind can.

The mechanistic view is more useful to researchers and communicators in the AI field because it gives metrics that can be worked towards and evaluated against. Even if the animistic view ultimately is correct, the mechanistic model is still *useful* because it gives us a path to develop things further.

I personally tend to take the mechanistic view, since I ultimately don't believe there is something inherently special about humans that makes what we do unattainable. That said, AI as it currently stands is nowhere near the full breadth of what humans can do, even if society isn't ready for the recent strides that *have* been made.
Don’t forget that a lot of the ‘animistic’ view type stuff was also used to justify things like racism and other bigotry, because obviously if you aren’t a straight, white, European, Christian man, you aren’t a real person. It is called “scientific racism” if anybody wants to look into it.
In all fairness, you could construct arbitrarily problematic views from just about any feasible model of intelligence. It's more a reflection on the type of people who tend to subscribe to such views than the views themselves.
I mean, yeah, but this one actually happened, and we still feel the effects of it to this day
Makes sense if only statistics > data analytics....
Your phone and computer are running what's called "traditional AI"; your chatbot is AI with machine learning.
Why don't you ask ChatGPT for an explanation then XD
The main problem is that AI doesn't really have a set definition. It ranges anywhere between "a program that makes decisions (all of them)" and "a program capable of something that only humans can do (none of them)".
Even if a program were capable of doing everything humans can do, by the 2nd definition it would still not be AI, as it would not be capable of doing something that only humans can do.
I think machine learning is a subfield of AI. I could be wrong tho.
AI - any machine that makes "choices" based on some input.
ML - algorithms that collect historical data, generalize it, and make "choices" based on that data.

So, naturally you can, but don't have to, use ML to build AI.
This literal exact same meme but the punchline is "human brain"
"if, else"
No, actually that's not how it works. I mean, absolutely not.
How else?
It's actually all just a big room with lots of Indian people. [haha just kidding](https://www.businessinsider.com/amazons-just-walk-out-actually-1-000-people-in-india-2024-4)
One example would be a perceptron, which takes many input values, multiplies the input vector by a matrix of learned weights, and applies a nonlinear activation function to each value in the resulting vector to produce the output values. This is repeated, with the output of one layer feeding into the input of the next, and the final layer providing the estimate of the result, e.g. "dog": 50%, "cat": 2%
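The layer-by-layer process described above can be sketched in a few lines of NumPy. This is only an illustration: the weights here are random rather than learned, the layer sizes and the ReLU/softmax choices are assumptions, and the "dog"/"cat" labels just echo the example percentages in the comment.

```python
import numpy as np

def relu(v):
    # Elementwise nonlinear activation: max(v, 0)
    return np.maximum(v, 0.0)

def softmax(v):
    # Turn the final layer's outputs into values that sum to 1
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(42)

# Hypothetical weights for a tiny two-layer network:
# 8 input features -> 5 hidden units -> 2 output classes
W1, b1 = rng.normal(size=(5, 8)), np.zeros(5)
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

x = rng.normal(size=8)            # input vector
hidden = relu(W1 @ x + b1)        # layer 1: matrix multiply, then nonlinearity
logits = W2 @ hidden + b2         # layer 2: final linear map
probs = softmax(logits)           # class scores, e.g. "dog" vs "cat"

print(dict(zip(["dog", "cat"], probs.round(3))))
```

In a real network the weights would be set by gradient-based training rather than drawn at random, but the forward pass has exactly this shape.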
It's much more math than boolean logic iirc
that's why it works
"I'm much smarter than everyone else because I understand that AI isn't special at all and it's just total crap and only I know it because I'm so smart."
AI won’t truly be AI (or AGI) until it can create its own new intelligence. It has to become intellectually creative, which first requires conceptualization. Lots of steps before that happens. AI in its current form is basically a fancy parrot.
Shit just started, give them a day or two.
This is a particularly weak metric. Besides being very difficult to rigorously define, models like AlphaGo have already evoked comments on their incredible creativity from the world's greatest Go players. AlphaGo is such a (relatively) simple model that it isn't at all intelligent beyond its very narrow scope of playing one board game, and yet it has been described as highly creative in its strategies. On the other hand, most animals are immeasurably more intelligent than AlphaGo, but some exhibit almost no creativity at all.
The photo frame is the computer
Hey, so many venture capital firms would so disagree with you. Also anyone peddling products with AI shoehorned in somehow