
> The term has all the same issues, does it not?

It could be useful for a similar reason to the euphemism treadmill: we could leave behind all of the misguided assumptions about AI along with the old 'artificial intelligence' nomenclature, and move forward with 'synthetic intelligence', which carries our new understanding of what systems like GPT-4 can do.



I'm thinking the problematic part of the term isn't the "artificial" part, but the "intelligence" part.

Since nobody actually knows what "intelligence" is, the word will mean to people whatever they want it to mean.


I’m partial to Plausible Regurgitation


>Since nobody actually knows what "intelligence" is

Everybody knows what intelligence is. Even if we can't agree on a precise definition, it's pretty obvious that it's the thing that humans and other animals do that involves learning, reasoning, planning, and problem solving. We can also agree that being successful at certain tasks constitutes intelligence. Solving a math problem is intelligence. Writing a poem is intelligence.


>Everybody knows what intelligence is

Much like...

"Everyone knows what porn is"

"Everyone knows who god is"

"Everyone knows what beauty is"

The devil is in the details, and rather generic words that describe a gradient can never capture the exact nature of what we're trying to define in specific situations.


> The devil is in the details

Only if you care about those details. Almost no one does.

In almost any conversation, everyone does in fact know what intelligence, porn, god and beauty are. Yes, all those ideas are fuzzy at the borders, but we almost never need to resolve them in detail when talking about them. When we do, then yes, things get tricky and there's a lot of disagreement - but at the end of the day, as the phrase I once read on the Internet goes, it all has to add up to normality. You can still work with fuzzy, casual concepts, even though you can't define them precisely.


You can never capture the exact nature of anything outside of logic and math. That's too high of a bar. Philosophers who have worked on this problem like Wittgenstein talk about concepts in terms of family resemblances, not exact definitions. If I'm trying to understand whether a system is intelligent, I don't need a logical proof. I learn whether it is intelligent by testing whether it can successfully do many of the same things that other intelligent systems do.


But words are meant to convey meaning to other people, so what the word means to others is more important than what it means to you.

This sort of problem is common with language, and is a great example of why I'm not really on board with using natural language for technical things.


>But words are meant to convey meaning to other people, so what the word means to others is more important than what it means to you.

I pretty much agree with that, so I'm not sure where the disagreement is here. Let me go back to the original statement I was responding to.

>Since nobody actually knows what "intelligence" is, the word will mean to people whatever they want it to mean.

If I tell you someone is intelligent, you roughly know what I am talking about. Just because it's hard to formalize that doesn't mean that the word can mean whatever people want it to mean. For example, if I tell you my friend is intelligent, you would be wrong to interpret that as meaning that my friend has red hair, because hair color is irrelevant to the traits that we normally associate with intelligence. The fact that there are right and wrong ways of interpreting my sentence implies that there is some generally agreed upon notion of what intelligence is, even if that notion is fuzzy and has grey areas.


> I'm not sure where the disagreement is here

I'm not sure we are disagreeing. I'm just having a discussion.

> If I tell you someone is intelligent, you roughly know what I am talking about.

Correct, because the context (you're talking about a human, and I know roughly what that means with humans) narrows the possibilities. But even there, it's a vague sort of intuitive knowledge, like trying to say what "art" is.

But when it comes to other areas -- such as machines -- context doesn't help narrow the possible meanings. What does saying a machine is "intelligent" mean? If you ask a machine learning person, you'll get a reasonably specific answer. If you ask the average person on the street, you'll get very, very different answers.

The reason is that we don't know what "intelligence" actually is. We don't even know, with any specificity, what it is in humans -- which is why psychologists assert that there are multiple kinds of intelligence (even if they disagree about how many there are).

> even if that notion is fuzzy and has grey areas.

I don't disagree at all. But the notion has more fuzzy and gray areas than solid ones. As an example, when most people imagine an "artificial intelligence", what they're really imagining is "consciousness". Is consciousness required for intelligence? Who knows? The answer to that depends on what you mean by "intelligence" and we don't agree enough on what that means to have that sort of discussion without beginning by defining the terms.


I don't think the difference between humans and machines matters here. We could ignore the "artificial" aspect and just focus on how we would decide whether some alien biological species is intelligent. I would say that the alien is intelligent if it displays the ability to learn, reason, form abstractions, and solve problems across a wide range of domains. I would apply the same criteria to a machine because I don't think the implementation details matter. It doesn't matter whether you are made of carbon or silicon, or whether you are running a neural network or propositional logic.


> I would say that the alien is intelligent if it displays the ability to learn, reason, form abstractions, and solve problems across a wide range of domains.

I don't disagree with this. What I'm saying is that that definition, reasonable as it is (and I do agree with it), is one that we've just decided on for this conversation.

It isn't one that would be considered complete and correct in all discussions about intelligence.

> I don't think the implementation details matter.

I agree, for the definition of intelligence you just cited. But my point is that "intelligence" is not well-defined or understood. I'm genuinely surprised that people think this is a controversial stance -- I really thought it was well-understood.

We can settle on a definition for rhetorical purposes (and, I would argue, that's mandatory in order to have any solid discussion about intelligence), but any definition we agree on will leave out a lot of things that people consider part of "intelligence".


No, that's not a definition that we've just decided on for this conversation. It's a core part of what people are talking about when they talk about intelligence. Just because we can't precisely define it to capture everyone's intuitions and edge cases doesn't mean that there aren't core features to the concept that everybody agrees on. In other words, anyone who says that learning, reasoning, abstraction, and problem solving aren't a part of intelligence is objectively wrong.


I disagree. I think that nobody knows what it is, as demonstrated by the fact that there is such wide disagreement about what it is.

> We can also agree that being successful at certain tasks constitutes intelligence. Solving a math problem is intelligence. Writing a poem is intelligence.

As an example, I don't agree that either of those things, by itself, indicates intelligence. We've had programs that nobody would call "intelligent" to do both of those things for decades.


>We've had programs that nobody would call "intelligent" to do both of those things for decades.

So you're right that if I have separate algorithms, each designed for a specific purpose, those algorithms aren't intelligent. However, if I have a general system that can learn how to solve a math problem, write a poem, and do a bunch of other things that humans can do, then that system is intelligent.



