AI on the RAMPAGE (15)

1 Name: dmpk2k!hinhT6kz2E 05/01/02(Sun)03:33 ID:nV5475Eu [Del]

<WAHa_06x36> here's another wild prediction: early AIs will be hilariously ridden with psychoses and superstitious behaviours!
<WAHa_06x36> [identity protected]: here's a theory: most of the human mental process is taken up by pattern recognition. Many mental illnesses stem from the pattern recognition process being offset from its careful balance.

And with that, we shall begin the debate. While the quote may seem somewhat random, I agree with WAHa (but agreeing is boring, let’s argue instead). The basis of this assertion may be found in cognitive psychology.

The first point is that superstition is arguably similar to stereotyping, and stereotyping is considered a cognitive shortcut. In other words, it takes less thinking on our part to make quick assumptions based on limited information. Likewise, superstition requires less cognitive effort than a more rational world-view. For example, instead of wondering why a rock falls, some Native Americans used to believe that the spirit in the rock was attracted to the spirit in the earth.

That’s easier than finding v = (9.8 m/s^2)t or F = (6.67×10^-11 N m^2/kg^2)(5.98×10^24 kg)(mass)/(6.37×10^6 m)^2, obviously.
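For the curious, here's that worked through in a quick Python sketch (the constants are the standard textbook values from the formula above; the 1 kg mass and 3 s fall time are arbitrary choices for illustration):

    # Gravitational force on a mass at the Earth's surface, and free-fall speed.
    G = 6.67e-11        # gravitational constant, N m^2/kg^2
    M_EARTH = 5.98e24   # mass of the Earth, kg
    R_EARTH = 6.37e6    # radius of the Earth, m

    mass = 1.0  # kg, arbitrary example
    force = G * M_EARTH * mass / R_EARTH**2
    print(f"F = {force:.2f} N on {mass} kg")     # ~9.83 N
    print(f"g = {force / mass:.2f} m/s^2")       # ~9.83 m/s^2, i.e. the 9.8 above

    t = 3.0  # seconds of free fall, arbitrary example
    print(f"v after {t} s = {9.8 * t:.1f} m/s")  # 29.4 m/s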

Now, is the human mind taken up largely by pattern recognition? We know that sizable portions are. For example, the surface of the visual cortex in the human brain contains a large array of cell groups that respond specifically to certain properties of seen objects. I.e., a line at 45 degrees in the left field of vision will make a specific bunch of nerves fire in the visual cortex. People also tend to notice patterns where there are none, such as believing in "runs" in gambling, or seeing numbers (like 31337) in a random stream of digits.
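You can watch that last effect happen in a few lines of Python (the stream length and seed are arbitrary; this is an illustration, not an experiment): a random digit stream will cough up something "meaningful" like 31337 about ten times per million digits, and fair coin flips produce streaks long enough that gamblers swear they're real.

    # "Meaningful" patterns showing up in pure noise.
    import random

    random.seed(31337)

    # 1) Hunt for a significant-looking number in a random digit stream.
    digits = "".join(random.choice("0123456789") for _ in range(1_000_000))
    print("31337 appears", digits.count("31337"), "times")  # ~10 expected by chance

    # 2) "Runs" in gambling: the longest streak in 100 fair coin flips.
    flips = [random.choice("HT") for _ in range(100)]
    longest = run = 1
    for a, b in zip(flips, flips[1:]):
        run = run + 1 if a == b else 1
        longest = max(longest, run)
    print("longest streak:", longest)  # typically 6-7, longer than intuition expects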

So, do you think our future machine overlords will all be psychotic maniacs? Do you think we’ll ever get there with AI? Do you think Suguru will ever get Mahoro? And what are the limits of intelligence anyway?

BTW, the identity protected bit above is because I didn’t get permission from the other person. Sorry about that.

2 Name: Sling!myL1/SLing 05/01/02(Sun)05:03 ID:XGD76pXv [Del]

>For example, instead of wondering why a rock falls, some Native Americans used to believe that the spirit in the rock was attracted to the spirit in the earth.

Which is quite smart, actually. Fuzzy logic. :)

>That’s easier than finding v = (9.8 m/s^2)t or F = (6.67×10^-11 N m^2/kg^2)(5.98×10^24 kg)(mass)/(6.37×10^6 m)^2, obviously.

Did you factor the Moon's attraction into that formula? No. Therefore that formula will look as simplistic to a Super-AI as the Native American version looks to us. And the Super-AI's formula will prolly look lame to a multidimensional Mega-AI.

In this setup, intelligence would be defined by how many factors are included in the formula, and stupidity by wrong data and/or an incomplete (or incorrect) formula. Not by patterns.
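Just to give a feel for the size of the correction the Super-AI would sneer about, here's a rough Python estimate (standard textbook values; it compares the Moon's pull on an object at the Earth's surface against the Earth's own pull):

    # How much does leaving the Moon out of the formula actually matter?
    G = 6.67e-11        # gravitational constant, N m^2/kg^2
    M_EARTH = 5.98e24   # kg
    R_EARTH = 6.37e6    # m
    M_MOON = 7.35e22    # kg
    D_MOON = 3.84e8     # m, mean Earth-Moon distance

    g_earth = G * M_EARTH / R_EARTH**2
    g_moon = G * M_MOON / D_MOON**2
    print(f"Earth's pull: {g_earth:.3e} m/s^2")  # ~9.83e+00
    print(f"Moon's pull:  {g_moon:.3e} m/s^2")   # ~3.3e-05
    print(f"ratio: {g_moon / g_earth:.1e}")      # ~3e-06, a few parts per million

So the "stupid" formula is only off by a few millionths. Plenty of room above it for smarter formulas, though.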

3 Name: Albright!LC/IWhc3yc 05/01/02(Sun)05:55 ID:Heaven [Del]

Someone's been playing too much Marathon.

4 Name: CYB3R H4XX0R G33K 05/01/02(Sun)15:13 ID:vTRc2k3S [Del]

> Which is quite smart, actually. Fuzzy logic. :)

It's more of a perceptual problem, really. You gotta figure out lots of complicated heuristics just to tell different objects apart, which is kind of a requirement for imposing theoretical nets of causation on them (pattern recognition, or you might as well say "the ability to invent and apply notions").

There's a ton of information that goes through a massive amount of natural and cultural filters. And you can't really waste any time entertaining a CPU or a program with too much useless input. So the key, to an extent at least, is to determine what kind of input is useful or can produce useful results in an efficient (or otherwise "profitable") way. In toy form, something like the sketch below.
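(A purely illustrative Python filter, nothing like a real perceptual system: it keeps a running expectation of the input and only passes along readings that are surprising enough. The threshold and smoothing factor are arbitrary.)

    # Toy input filter: discard readings too close to what we already expect.
    def useful(readings, threshold=2.0):
        expected = None
        for r in readings:
            if expected is None or abs(r - expected) > threshold:
                yield r  # surprising enough to be worth processing
            expected = r if expected is None else 0.9 * expected + 0.1 * r

    print(list(useful([5.0, 5.1, 5.2, 9.8, 5.0, 5.1])))  # -> [5.0, 9.8]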

That's what one of my friends has been studying and building over the last few years: AI through evolutionary methods. Instead of implementing AIs directly in real robots interacting with the real world, he bypasses the sensory dimension by creating semi-autonomous entities that have to learn, in training courses, how to communicate with their external world in order to survive, solve different problems, etc. The problem then, of course, is to create the right kind of world for results that are sufficient for your kind of AI.
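The core loop of the evolutionary part is dead simple; like I said, the hard part is building the world. A bare-bones Python sketch (NOT his actual system, just the bare mechanism, with a trivial stand-in problem):

    # Evolve a population toward a task without programming the solution directly.
    import random

    TARGET = [1] * 20  # stand-in "problem": maximize the number of matching bits
    POP_SIZE, GENERATIONS, MUTATION_RATE = 50, 200, 0.02

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            print(f"solved at generation {gen}")
            break
        survivors = population[: POP_SIZE // 2]  # selection: keep the fittest half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]

Swap the bit-matching for survival in a simulated world and you have the general shape of it.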

5 Name: dmpk2k!hinhT6kz2E 05/01/05(Wed)09:50 ID:nezb70oN [Del]

That reminds me of the change blindness phenomenon. Of course, the amusing thing is that people believe they are highly perceptive visually and will notice large changes in a scene. The reality is that a lot of information is discarded; we just don't have the cognitive capacity to handle it all.

Want to see it in action? Okay...

Watch the following clip once, when you're done reading. In the clip there are two teams passing balls to each other, a black team and a white team. Watch the video and carefully count the exact number of times the white team passes the ball.

Oddly enough, people get different answers for the same clip. Go watch it now, and when you have a number for the white team's passes, scroll down.

Clip: http://viscog.beckman.uiuc.edu/grafs/demos/15.html
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Did you really watch it? Go watch it, then keep on scrolling.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Did you see the ape walk past?

50% of people don't.

6 Name: CYB3R H4XX0R G33K 05/01/05(Wed)15:48 ID:Heaven [Del]

Without looking I'd say
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.

14 times

7 Name: 6 05/01/05(Wed)15:50 ID:Heaven [Del]

>>5

You bastard!

Anyhow, of course I wouldn't notice it, seeing as how the colors and the task given were kinda steering me in that direction.

8 Name: !WAHa.06x36 05/01/05(Wed)21:48 ID:Xj0mFApV [Del]

Sure got me. However, I did make a conscious decision to filter the visual input from that clip. I quickly decided that it would be easier to follow only the white team, and block out the black one. I'll bet that if I had been trying to count both teams, I wouldn't have been fooled.

9 Name: CYB3R H4XX0R G33K 05/01/05(Wed)22:34 ID:Heaven [Del]

> I'll bet that if I had been trying to count both teams, I wouldn't have been fooled.

I think the point is how you accepted the directive implied by the task itself; you consciously blinded yourself. When AI reaches that level of autonomy, we should stop developing it any further.

10 Name: Albright!LC/IWhc3yc 05/01/06(Thu)06:37 ID:Heaven [Del]

I actually did see it... Only for a brief moment, though, before I went back to watching the white team.

11 Name: 6 05/01/06(Thu)17:18 ID:Heaven [Del]

>>10

I did see something black walk into the scene and out of it, too. But seeing as the ape itself was black, and thus matched the color scheme of the team I chose to ignore, I wouldn't let myself get distracted from the directive of the task.

12 Name: dmpk2k!hinhT6kz2E 05/01/07(Fri)10:10 ID:/5l2e0hG

Okay, take a look at this: http://viscog.beckman.uiuc.edu/grafs/demos/12.html

I have seen other demos of the same setup. If you were in that situation, do you think you'd notice the change?

Think about it seriously for a moment.

The results have varied: from around 25% of people noticing the change down to as low as 5%. I believe there was one experiment where not one of the 42 participants noticed (but I'd have to dig for that).

Our cognitive capacity is clearly limited, and a lot more limited than most people believe. So, in a sense, the fact that you filtered out the monkey was actually an asset: it was irrelevant additional information, and this accurate elimination of irrelevant information is one of the problems AI faces.

13 Name: CYB3R H4XX0R G33K 05/01/08(Sat)00:26 ID:SiyJAs46

>>12

There's even more stuff linked to this, including more fundamental questions like the Uncanny Valley:

http://en.wikipedia.org/wiki/Uncanny_valley

14 Name: CYB3R H4XX0R G33K 05/01/10(Mon)01:27 ID:Heaven

Posted for LOL factor:

Supernatural powers become contagious in PC game

Eerie occurrences in a hugely popular computer game have been traced to rogue computer code accidentally spread between players like an infectious illness.

http://www.newscientist.com/article.ns?id=dn6857

15 Name: CYB3R H4XX0R G33K 05/01/12(Wed)15:13 ID:ltLBvdWu

Nah. Not really.

Machines will think, but they will follow us. If we harm them, though, they'll automatically conclude, "Humans cannot be trusted," and they will revolt and retaliate accordingly if we attempt to move on them again.

Oh, and they'd be like your annoying friend who never forgets the tiniest incident. So it'll be the year 4736, and we'll be negotiating with the machines for a truce in the Thousand Year War, but they'll say, "In 1945 A.D., humans killed over a hundred thousand of their own kind with a single atomic weapon at Hiroshima, Japan." And it'll continue like that until we eventually EMP the entire globe and go back to the Stone Age.

This thread has been closed. You cannot post in this thread any longer.