Would advanced AI possess Buddha-nature?

PuerAzaelis
Posts: 826
Joined: Thu Jun 02, 2016 1:37 pm

Re: Would advanced AI possess Buddha-nature?

Post by PuerAzaelis » Mon Oct 23, 2017 9:54 pm

boda wrote:
Sun Oct 08, 2017 12:09 am
Take Google DeepMind for example ...
Since DeepMind is a system based on mechanisms, reactions, and responses, maybe it is more like Mara than Buddha.
And nobody in all of Oz. No Wizard that there is or was.

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Mon Oct 23, 2017 11:12 pm

Grigoris wrote:
Sat Oct 21, 2017 8:30 am
boda wrote:
Sat Oct 21, 2017 12:08 am
They say Go requires intuition and creativity...
The creativity and intuition required for Go is based on a finite set of specific positions and moves of the pieces. A computer can just crunch the numbers and make a prediction based on this large, yet finite, set thus simulating what appears to be intuition and creativity.
This works with games like chess, but as Bristollad and I have tried to explain, it is not possible with Go. Though the possibilities in Go are still finite, the range is far too large.
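The scale gap being described here can be made concrete with a back-of-the-envelope sketch. The branching factors and game lengths below are rough, commonly cited averages, not exact counts, and the code is only illustrative:

```python
# Rough, commonly cited averages: chess ~35 legal moves per turn over
# ~80 plies; Go ~250 legal moves per turn over ~150 plies.
# These are illustrative figures, not exact counts.
chess_tree = 35 ** 80
go_tree = 250 ** 150

def order_of_magnitude(n):
    """Number of decimal digits minus one, i.e. floor(log10(n))."""
    return len(str(n)) - 1

print(f"chess game tree ~ 10^{order_of_magnitude(chess_tree)}")
print(f"go game tree    ~ 10^{order_of_magnitude(go_tree)}")
```

With these figures the chess game tree lands around 10^123 and Go around 10^359, which is why "just crunching the numbers" works far better for chess-style search than for Go, and why AlphaGo relied on learned position evaluation rather than brute enumeration.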
I am talking about Artificial INTELLIGENCE; you are defining intelligence merely as problem solving.
Intelligence is defined as general cognitive problem-solving skill. AlphaGo is simply a rather narrow (not AGI) intelligence. Many AI researchers believe that an artificial general intelligence, comparable to humans in its capacity to accomplish a similarly wide range of goals, may be developed within a couple of decades.
I have given examples of intelligence beyond the bounds of problem solving, something that computers are completely incapable of right now.
Your point is that problem solving is not intelligence?
Creativity is based on thinking outside of set boundaries and it is emotions that allow us to transcend the boundaries of reason.
There's nothing magic about creativity, and irrationality could be due to brain damage rather than emotional maladaptation.

Grigoris
Global Moderator
Posts: 15164
Joined: Fri May 14, 2010 9:27 pm
Location: Greece

Re: Would advanced AI possess Buddha-nature?

Post by Grigoris » Tue Oct 24, 2017 12:09 pm

boda wrote:
Mon Oct 23, 2017 11:12 pm
Your point is that problem solving is not intelligence?
No, my point is that problem solving is only one small aspect of intelligence.
There's nothing magic about creativity...
Didn't say there was.
...and irrationality could be due to brain damage rather than emotional maladaptation.
Yes, but this doesn't change anything that I said.
"My religion is not deceiving myself."
Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."
The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Tue Oct 24, 2017 10:21 pm

Grigoris wrote:
Tue Oct 24, 2017 12:09 pm
my point is that problem solving is only one small aspect of intelligence.
AlphaGo or DeepMind has what we might say is a very narrow intelligence.
... sometimes one needs an intense emotional reaction (outrage, compassion, desire) to spark an intelligent response, to get one motivated to think about solutions.
So you seem to accept that DeepMind is some sort of limited intelligence that solves problems of a very limited range. Obviously, it doesn't require emotions to function or "spark an intelligent response." You say that emotions are sometimes needed to spark intelligence, however. What is it about these 'sometimes' that require emotion?

Johnny Dangerous
Global Moderator
Posts: 7456
Joined: Fri Nov 02, 2012 10:58 pm
Location: Olympia WA

Re: Would advanced AI possess Buddha-nature?

Post by Johnny Dangerous » Wed Oct 25, 2017 3:13 am

I don't think you can call something without agency 'an intelligence'.
"it must be coming from the mouthy mastermind of raunchy rapper, Johnny Dangerous”

-Jeff H.

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Wed Oct 25, 2017 4:37 am

Johnny Dangerous wrote:
Wed Oct 25, 2017 3:13 am
I don't think you can call something without agency 'an intelligence'.
Because?

Grigoris
Global Moderator
Posts: 15164
Joined: Fri May 14, 2010 9:27 pm
Location: Greece

Re: Would advanced AI possess Buddha-nature?

Post by Grigoris » Wed Oct 25, 2017 9:20 am

boda wrote:
Tue Oct 24, 2017 10:21 pm
What is it about these 'sometimes' that require emotion?
Really? You cannot think of any examples from your life? I am sure you can!
"My religion is not deceiving myself."
Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."
The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

Nemo
Posts: 806
Joined: Thu Jan 21, 2010 3:23 am
Location: Canada

Re: Would advanced AI possess Buddha-nature?

Post by Nemo » Wed Oct 25, 2017 12:31 pm

I remember a conversation between a tulku and a terton decades ago. One mentioned Namkhai Norbu talking about all the world systems where Dzogchen was taught. The terton stated that teaching AI or computer-based intelligence was problematic because they found it nearly impossible to see the nature of mind.

Cianan
Posts: 19
Joined: Sun May 21, 2017 2:31 pm

Re: Would advanced AI possess Buddha-nature?

Post by Cianan » Wed Oct 25, 2017 2:51 pm

Johnny Dangerous wrote:
Wed Oct 25, 2017 3:13 am
I don't think you can call something without agency 'an intelligence'.
Interestingly, some months ago Facebook shut down and revised an experimental dialogue AI when its learning “led to divergence from human language as the agents developed their own language for negotiating.” In self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

https://www.theatlantic.com/technology/ ... ge/530436/

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Wed Oct 25, 2017 6:41 pm

Grigoris wrote:
Wed Oct 25, 2017 9:20 am
boda wrote:
Tue Oct 24, 2017 10:21 pm
What is it about these 'sometimes' that require emotion?
Really? You cannot think of any examples from your life? I am sure you can!
Where an intense emotional reaction is required to think about solutions to a problem? That happens, but I don’t believe the intense emotion is required; in fact, it may interfere with solving the problem.

It’s curious that a Buddhist of all people would fail to see the value of equanimity.

Johnny Dangerous
Global Moderator
Posts: 7456
Joined: Fri Nov 02, 2012 10:58 pm
Location: Olympia WA

Re: Would advanced AI possess Buddha-nature?

Post by Johnny Dangerous » Wed Oct 25, 2017 7:44 pm

Cianan wrote:
Wed Oct 25, 2017 2:51 pm
Johnny Dangerous wrote:
Wed Oct 25, 2017 3:13 am
I don't think you can call something without agency 'an intelligence'.
Interestingly, some months ago Facebook shut down and revised an experimental dialogue AI when its learning “led to divergence from human language as the agents developed their own language for negotiating.” In self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

https://www.theatlantic.com/technology/ ... ge/530436/
AFAIK it was not spontaneous; rather, the AI was already programmed to negotiate a new "language" if it was more efficient than natural language.
"it must be coming from the mouthy mastermind of raunchy rapper, Johnny Dangerous”

-Jeff H.

Grigoris
Global Moderator
Posts: 15164
Joined: Fri May 14, 2010 9:27 pm
Location: Greece

Re: Would advanced AI possess Buddha-nature?

Post by Grigoris » Wed Oct 25, 2017 8:41 pm

boda wrote:
Wed Oct 25, 2017 6:41 pm
It’s curious that a Buddhist of all people would fail to see the value of equanimity.
It is curious that a non-Buddhist would fail to see the value of emotion.

You also forget that I am a Vajrayana Buddhist. In Vajrayana, emotions are not our enemies.
"My religion is not deceiving myself."
Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."
The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Wed Oct 25, 2017 9:15 pm

Grigoris wrote:
Wed Oct 25, 2017 8:41 pm
boda wrote:
Wed Oct 25, 2017 6:41 pm
It’s curious that a Buddhist of all people would fail to see the value of equanimity.
It is curious that a non-Buddhist would fail to see the value of emotion.

You also forget that I am a Vajrayana Buddhist. In Vajrayana, emotions are not our enemies.
I never suggested that I don't value emotion. With some rereading, you might come to understand what I wrote, specifically that I don’t believe intense emotion is required for problem-solving.

You still haven't explained why intense emotion may be required for problem-solving, if you're claiming this.

I didn't know you're a Vajrayana Buddhist. In any case, emotions shouldn't be considered an enemy to any practicing Buddhist. The only way out is through. :sage:

Cianan
Posts: 19
Joined: Sun May 21, 2017 2:31 pm

Re: Would advanced AI possess Buddha-nature?

Post by Cianan » Wed Oct 25, 2017 9:23 pm

Johnny Dangerous wrote:
Wed Oct 25, 2017 7:44 pm
Cianan wrote:
Wed Oct 25, 2017 2:51 pm
Johnny Dangerous wrote:
Wed Oct 25, 2017 3:13 am
I don't think you can call something without agency 'an intelligence'.
Interestingly, some months ago Facebook shut down and revised an experimental dialogue AI when its learning “led to divergence from human language as the agents developed their own language for negotiating.” In self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

https://www.theatlantic.com/technology/ ... ge/530436/
AFAIK it was not spontaneous; rather, the AI was already programmed to negotiate a new "language" if it was more efficient than natural language.
Perhaps spontaneous wasn't the right word, and it is true that the language came out of increasing the efficiency of the task. In any case, with advanced AI still clearly in its infancy, it is remarkable that it can already take on a life of its own in ways that we neither expect nor can truly explain.

Grigoris
Global Moderator
Posts: 15164
Joined: Fri May 14, 2010 9:27 pm
Location: Greece

Re: Would advanced AI possess Buddha-nature?

Post by Grigoris » Wed Oct 25, 2017 9:49 pm

boda wrote:
Wed Oct 25, 2017 9:15 pm
You still haven't explained why intense emotion may be required for problem-solving, if you're claiming this.
Yes I have.
"My religion is not deceiving myself."
Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."
The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Wed Oct 25, 2017 10:30 pm

Grigoris wrote:
Wed Oct 25, 2017 9:49 pm
boda wrote:
Wed Oct 25, 2017 9:15 pm
You still haven't explained why intense emotion may be required for problem-solving, if you're claiming this.
Yes I have.
With this?
Grigoris wrote:Emotions, the randomising effect on logic, are a positive evolutionary trait because they allow for reactions that may appear illogical, but ultimately may lead to survival. It is because of this ability to randomise that humans have been able to survive, because of the ability to innovate. A computer, for example, will not get bored and so they will not look for an avenue of change/escape and thus will not be open to innovation. Intelligence needs emotion.
I thought we both dismissed this idea, but I'll revisit it assuming you still believe it merits attention.

In the DeepMind Breakout example that I posted earlier, the AI begins learning by performing random actions. Eventually, using reinforcement learning, it learns which actions lead to results that accomplish its goal of a high score. Indeed, the AI will never get bored and will keep playing until it reaches its goal. Most people would probably get bored or discouraged and quit before reaching their full potential.
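The loop described here, random actions at first, then reinforcement of whichever actions raise the score, can be sketched with tabular Q-learning on a toy task. This is only an illustrative stand-in: DeepMind's Atari agent used a deep Q-network over screen pixels, not a lookup table, and the tiny environment below is invented for the example.

```python
import random

# Toy stand-in for the Breakout setup: a 1-D chain where the agent
# starts at state 0 and is rewarded only for reaching the last state.
# NOT DeepMind's actual system; just the same bare idea of random
# exploration refined by reward.
N_STATES = 6          # states 0..5; reaching state 5 ends the episode
ACTIONS = [-1, +1]    # step left or right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Early on this is mostly random flailing; as Q-values grow,
            # the greedy branch takes over. Boredom never enters into it.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # the learned policy should prefer stepping right at every state
```

The agent starts out worse than any deliberate player and never quits; after enough episodes the reward signal alone has shaped a competent policy, which is the pattern the Breakout demo showed at much larger scale.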

In the Breakout game, the AI began by playing worse than most people would, but quickly came to play better than any human.

The idea that emotions are evolutionarily functional because of a randomizing effect is odd in itself, btw.

Johnny Dangerous
Global Moderator
Posts: 7456
Joined: Fri Nov 02, 2012 10:58 pm
Location: Olympia WA

Re: Would advanced AI possess Buddha-nature?

Post by Johnny Dangerous » Thu Oct 26, 2017 3:07 am

Cianan wrote:
Wed Oct 25, 2017 9:23 pm
Johnny Dangerous wrote:
Wed Oct 25, 2017 7:44 pm
Cianan wrote:
Wed Oct 25, 2017 2:51 pm


Interestingly, some months ago Facebook shut down and revised an experimental dialogue AI when its learning “led to divergence from human language as the agents developed their own language for negotiating.” In self-training, the AI spontaneously came to use a language incomprehensible to the researchers.

https://www.theatlantic.com/technology/ ... ge/530436/
AFAIK it was not spontaneous; rather, the AI was already programmed to negotiate a new "language" if it was more efficient than natural language.
Perhaps spontaneous wasn't the right word, and it is true that the language came out of increasing the efficiency of the task. In any case, with advanced AI still clearly in its infancy, it is remarkable that it can already take on a life of its own in ways that we neither expect nor can truly explain.

I didn't see any evidence that no one could explain the Facebook AI thing; it seemed like a fairly simple explanation, actually. I have no background in AI, but enough of a background in programming that, personally, I found nothing at all remarkable about the story, other than that the "language" the AIs invented to speak to each other was pretty funny. Again, they had programmed the AI to come up with its own language syntax, and English was inefficient for the task. If you broke down what the AIs were saying mathematically (deciding who gets what, if I recall), it was not that surprising. Even with my limited programming knowledge I can conceptualize a series of if/then statements and the like that would enable something roughly like that to happen. It was an interesting story, but I didn't see why it got the press it did, nor what was supposed to be so mysterious about it.
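The kind of degenerate-but-decodable shorthand being described can be mocked up in a few lines. This is a purely hypothetical toy encoding, not Facebook's actual system (whose messages reportedly looked like "i can i i everything else"), just to show how repetition can stand in for English numerals once nothing rewards staying readable:

```python
# Hypothetical toy "protocol": quantities encoded by repeating the item
# token instead of using English numerals. Compact and trivially
# machine-parseable, but it reads as gibberish to a human.
def encode(wants):
    """wants: dict mapping item name -> desired count."""
    return "i want " + " ".join(
        item for item, n in wants.items() for _ in range(n)
    )

def decode(msg):
    """Recover the item counts from an encoded message."""
    counts = {}
    for token in msg.split()[2:]:  # skip the fixed "i want" prefix
        counts[token] = counts.get(token, 0) + 1
    return counts

msg = encode({"ball": 3, "hat": 1})
print(msg)          # i want ball ball ball hat
print(decode(msg))  # {'ball': 3, 'hat': 1}
```

A negotiation scorer that only checks the decoded counts would never penalize this drift away from English, which is roughly the dynamic the Atlantic piece describes.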

To me it was less a story about AI gaining sentience and more a story about the complexity of our tools making for amusing (and sometimes scary) anecdotes.

It would be interesting to know what the limitations and parameters were on how it used language.
"it must be coming from the mouthy mastermind of raunchy rapper, Johnny Dangerous”

-Jeff H.

daibunny
Posts: 14
Joined: Wed May 07, 2014 2:22 am

Re: Would advanced AI possess Buddha-nature?

Post by daibunny » Thu Oct 26, 2017 4:53 am

This question immediately reminded me of the Mu koan. For those not familiar:
The Koan Mu

This is the main case of the koan, formally called "Chao-chou's Dog":

A monk asked Master Chao-chou, "Has a dog the Buddha Nature or not?" Chao-chou said, "Mu!"

(Actually, he probably said "Wu," which is the Chinese for "Mu," a Japanese word. Mu is usually translated "no," although the late Robert Aitken Roshi said its meaning is closer to "does not have." Zen originated in China, where it is called "Chan." But because Western Zen has been largely shaped by Japanese teachers, we in the West tend to use Japanese names and terms.)
From https://www.thoughtco.com/what-is-mu-in-zen-449929

I have no opinion myself except that the original question is interesting in that it makes one look at what it means to be a self and a human.

Mod note: removed non-sequitur complaint about moderation

Grigoris
Global Moderator
Posts: 15164
Joined: Fri May 14, 2010 9:27 pm
Location: Greece

Re: Would advanced AI possess Buddha-nature?

Post by Grigoris » Thu Oct 26, 2017 7:33 am

boda wrote:
Wed Oct 25, 2017 10:30 pm
In the DeepMind Breakout example that I posted earlier, the AI begins learning by performing random actions. Eventually, using reinforcement learning, it learns which actions lead to results that accomplish its goal of a high score. Indeed, the AI will never get bored and will keep playing until it reaches its goal. Most people would probably get bored or discouraged and quit before reaching their full potential.
Boredom and discouragement can be positive traits too. They can be a mechanism by which somebody moves on from something that is not fruitful or productive (playing pointless board games, for example). A computer will not get bored or discouraged and so will play the board game to completion/perfection. So what? How is this a sign of intelligence? Sometimes getting bored and moving on is also a sign of intelligence, but because it is based in emotion (frustration, for example), a computer will not do it. So again we have another clear example of the evolutionary function of emotion and how emotion plays a role in intelligence.
The idea that emotions are functional evolutionarily because of a randomizing effect is odd in itself, btw.
Real life is not always about reasoned and well analysed actions, sometimes taking a risk is what is needed, or even changing the rules of the game...

Now, because I am getting bored of this repetitive and circular conversation, I am going to make an emotionally based (and intelligent) decision to remove myself from it, since I am tired of making (and supporting) my point repeatedly (unlike a computer, which would continue to do so ad nauseam, which is rather unintelligent, I would say).
"My religion is not deceiving myself."
Jetsun Milarepa 1052-1135 CE

"Butchers, prostitutes, those guilty of the five most heinous crimes, outcasts, the underprivileged: all are utterly the substance of existence and nothing other than total bliss."
The Supreme Source - The Kunjed Gyalpo
The Fundamental Tantra of Dzogchen Semde

boda
Posts: 1667
Joined: Thu Jul 03, 2014 8:40 pm

Re: Would advanced AI possess Buddha-nature?

Post by boda » Thu Oct 26, 2017 7:09 pm

Grigoris wrote:
Thu Oct 26, 2017 7:33 am
boda wrote:
Wed Oct 25, 2017 10:30 pm
In the DeepMind Breakout example that I posted earlier, the AI begins learning by performing random actions. Eventually, using reinforcement learning, it learns which actions lead to results that accomplish its goal of a high score. Indeed, the AI will never get bored and will keep playing until it reaches its goal. Most people would probably get bored or discouraged and quit before reaching their full potential.
Boredom and discouragement can be positive traits too. They can be a mechanism by which somebody moves on from something that is not fruitful or productive (playing pointless board games, for example). A computer will not get bored or discouraged and so will play the board game to completion/perfection. So what? How is this a sign of intelligence?
You seemed to be claiming that the randomizing effect of emotion is functional in problem solving, and that because machines lack emotion they lack this necessary aspect of intelligent problem solving. The DeepMind breakout example clearly dispels this odd notion. We don't even need to venture into theoretical AI.
Sometimes getting bored and moving on is also a sign of intelligence, but because it is based in emotion (frustration, for example), a computer will not do it.
Boredom is an emotion concept based in low arousal and unpleasant affect. Machines lack affect because their intelligence is not built on the substrate of a biological organism. We've been through this early on, but you don't seem to accept or understand this fundamental difference.
So again we have another clear example of the evolutionary function of emotion and how emotion plays a role in intelligence.
Which is irrelevant to AI because they lack biological bodies.
The idea that emotions are functional evolutionarily because of a randomizing effect is odd in itself, btw.
Real life is not always about reasoned and well analysed actions, sometimes taking a risk is what is needed, or even changing the rules of the game...
In the DeepMind breakout game example, the AI's first actions were utterly random. It was only after evaluating the effects of these random actions that it learned which were effective in accomplishing its goal.

People cheat to gain a selfish advantage. Cooperative behavior can be mutually beneficial to all.
