#269872 - 05/13/14 05:55 PM
Re: Threats on the horizon
[Re: Arney]
Pooh-Bah
Registered: 03/13/05
Posts: 2322
Loc: Colorado
Edit: Actually, come to think of it, the WOPR almost blew up the world and it didn't even realize that it wasn't playing a game! Much like modern-day politicians! I think I'm going to have to go back and watch WarGames again. I remember it was a fun one to watch, but that was so long ago my memory is fuzzy.
#269896 - 05/14/14 04:18 AM
Re: Threats on the horizon
[Re: Bingley]
Carpal Tunnel
Registered: 04/28/10
Posts: 3164
Loc: Big Sky Country
Looking at the Chinese Room thought experiment from Philosophy 101, we can surmise there's more to consciousness than just electrical activity. Some scientists think there's a quantum element to it. Still, that doesn't necessarily mean it's tied to organic processes per se. I feel that true sentience is going to be possible for a machine; further, I believe that everything that's not impossible is inevitable. Better hope our robot overlords are wiser and more benevolent than their erstwhile human masters!
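To make the Chinese Room point concrete, here is a toy sketch in Python (the rulebook entries and symbols are invented purely for illustration): the operator just matches incoming symbols against a rulebook and copies out the listed reply, and no part of the room understands a word of Chinese.

# Toy Chinese Room: match input symbols against a rulebook and copy
# out the listed reply. No meaning is involved at any step.
RULEBOOK = {
    "你好": "你好，吃饭了吗？",   # invented entries, for illustration only
    "谢谢": "不客气。",
}

def chinese_room(symbols):
    # Unknown input gets a stock "please say that again" reply.
    return RULEBOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好"))  # looks like understanding from the outside

From outside the door the replies look fluent enough; the argument is over whether scaling that lookup up could ever amount to understanding.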
_________________________
“I'd rather have questions that cannot be answered than answers that can't be questioned.” —Richard Feynman
#269901 - 05/14/14 06:59 AM
Re: Threats on the horizon
[Re: Arney]
Rapscallion
Carpal Tunnel
Registered: 02/06/04
Posts: 4020
Loc: Anchorage AK
IIRC, sentience could be achieved by building a purpose-driven program sophisticated enough to recognize productive algorithmic content and expand its own programming. Programs in the range of a few hundred terabytes should be sophisticated enough to "learn". Providing a storage capacity of a few hundred exabytes could allow a program to at least gain self-awareness. Sentience would be less likely, but the chance is significant.
Ones and zeros can be random, or they can have purpose. Humans perceive purpose. If machines perceive purpose, it will be interesting to see how they respond. The missiles in the silos may be operated by antiquated computing systems, but humans control those systems. Who controls the humans, and how?
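A very loose Python sketch of the "recognize productive content and expand its own programming" idea (the goal and the scoring rule here are invented stand-ins): the program proposes new rules, keeps the ones that score better against its purpose, and discards the rest.

import random

TARGET = 42  # the program's "purpose": an invented stand-in goal

def score(rule):
    # Higher is better: how productive this piece of "programming" is.
    return -abs(rule - TARGET)

def expand(rules, generations=1000):
    # Propose new content, keep it only if it beats the weakest rule.
    for _ in range(generations):
        candidate = random.randint(0, 100)
        worst = min(rules, key=score)
        if score(candidate) > score(worst):
            rules[rules.index(worst)] = candidate
    return rules

print(expand([random.randint(0, 100) for _ in range(5)]))

Whether anything along those lines, scaled up by terabytes or exabytes, ever adds up to self-awareness is exactly the open question.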
_________________________
The ultimate result of shielding men from the effects of folly is to fill the world with fools. -- Herbert Spencer, English Philosopher (1820-1903)
#269902 - 05/14/14 10:16 AM
Re: Threats on the horizon
[Re: Bingley]
Addict
Registered: 01/13/09
Posts: 574
Loc: UK
> What I haven't figured out is what makes sentience. I know I have a brain (at least allegedly) and my mind is the software that runs on that hardware (see above), but I cannot explain how my cerebral cortex, or some other part or combination of parts of my brain, allows sentience to happen.
That's what philosophers call 'the mind-body problem'. You can think of rabbits, but no one looking at your brain is going to find that image. So where is the mind? It's certainly affected by the brain, as brain damage proves. But it seems to be more than the brain, so what is it?
> Computers as they are now do not have minds and even the very best Turing Test-passing AI cannot be said to be sentient. Until we figure out what makes humans sentient it strikes me as being unlikely that we'll be able to create sentient computers.
Apart from the fact that computers seem a long way from passing the Turing test (at the moment it's only believable if you accept you're talking to someone who is insane), the test itself, as you suggest, is the wrong measure. To have consciousness, judgment, ideas... you have to be telling the truth when you say you have them. A computer will one day be able to say it has all of these, but it won't in reality.
qjs
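The "saying it isn't having it" point fits in a few lines of Python: a program that reports whatever inner state you ask about, and plainly has none of them.

# A program that *claims* any inner state you name. It has none of them,
# which is the whole point about self-report and the Turing test.
def has_inner_state(state):
    return "Yes, I definitely have " + state + "."

for state in ("consciousness", "judgment", "ideas"):
    print(has_inner_state(state))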
#269907 - 05/14/14 02:28 PM
Re: Threats on the horizon
[Re: quick_joey_small]
Geezer
Registered: 06/02/06
Posts: 5357
Loc: SOCAL
The only thing I know about sentience I learned from reading Isaac Asimov's Robot series and Foundation series. (Spoiler alert: the final book in the Foundation series links the two.)
I don't see robots gaining sentience in my lifetime. AI -- sure, we'll have robots making decisions within their programming, but they won't know why and they won't care. I kinda chuckle when I watch the news and see a robot being sent in. Robot, or radio-controlled machine? Does it make decisions on its own or is it being controlled by a human at the other end of a wireless tether? We're a long way from robotic sentience...
#269917 - 05/14/14 05:27 PM
Re: Threats on the horizon
[Re: Bingley]
Pooh-Bah
Registered: 09/15/05
Posts: 2485
Loc: California
According to this article, the Pentagon is working on developing autonomous systems with morals/ethics. The Office of Naval Research will award $7.5 million in grant money over five years to university researchers from Tufts, Rensselaer Polytechnic Institute, Brown, Yale and Georgetown to explore how to build a sense of right and wrong and moral consequence into autonomous robotic systems. Uh, and whose morals are we going to pattern this program on, exactly? The same types of people calling the shots now?
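Nobody outside those research groups knows what their systems will actually look like, but as a purely hypothetical sketch, machine "ethics" often gets prototyped as little more than a veto filter over candidate actions (the rule names and action fields below are invented):

# Hypothetical sketch only: "ethics" as a veto filter over candidate actions.
# Rule names and action fields are invented; the funded research presumably
# aims at something far richer (moral reasoning, consequence modelling).
FORBIDDEN = {"harm_noncombatant", "destroy_protected_site"}

def permitted(action):
    # Veto any action whose predicted effects include a forbidden outcome.
    return FORBIDDEN.isdisjoint(action.get("predicted_effects", []))

candidates = [
    {"name": "hold_position", "predicted_effects": []},
    {"name": "fire_on_vehicle", "predicted_effects": ["harm_noncombatant"]},
]
print([a["name"] for a in candidates if permitted(a)])  # only hold_position survives

Which only sharpens the question above: someone still has to decide what goes on the forbidden list.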
#269918 - 05/14/14 07:18 PM
Re: Threats on the horizon
[Re: Arney]
Old Hand
Registered: 03/08/03
Posts: 1019
Loc: East Tennessee near Bristol
(Puts on cynic hat)
Follow the money. Whose friend, brother-in-law, big contributor, etc., is on the research team?
(Takes off cynic hat)
As for whose morals are in the programming, that's variable. Exact morality decisions would change depending on the theater of operations. For example, a roadblock here in the US for a natural disaster would have much tighter rules of engagement than one in an area with a higher probability of suicide bombers.
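That "it varies by theater" point is, in practice, a configuration problem. A hypothetical Python sketch (all names, settings, and escalation steps invented for illustration):

# Hypothetical: rules of engagement as per-theater configuration.
RULES_OF_ENGAGEMENT = {
    "domestic_disaster_roadblock": {
        "lethal_force_authorized": False,
        "escalation_steps": ["verbal warning", "detain", "hand over to law enforcement"],
    },
    "high_threat_checkpoint": {
        "lethal_force_authorized": True,
        "escalation_steps": ["verbal warning", "warning shot", "disabling fire"],
    },
}

def roe_for(theater):
    return RULES_OF_ENGAGEMENT[theater]

print(roe_for("domestic_disaster_roadblock")["lethal_force_authorized"])  # False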
#270037 - 05/19/14 02:06 PM
Re: Threats on the horizon
[Re: Phaedrus]
Veteran
Registered: 12/12/04
Posts: 1204
Loc: Nottingham, UK
> Looking at the Chinese Room thought experiment from Philosophy 101 we can surmise there's more to consciousness than just electrical activity.

The Chinese Room thought experiment is a bit of misdirection. None of the components of the room understands Chinese; the room as a whole does. This is the Systems Reply, and although Searle has addressed it, he doesn't really get it.

> Some scientists think there's a quantum element to it.

Notably Penrose. But he is also making some basic mistakes: he thinks humans can solve the Halting Problem, or, to put it another way, that Gödel's Incompleteness Theorem applies to machines but not to humans. He's wrong. I could argue these points at more length, but I don't think it's on-topic for this forum, especially as we both agree that AI is probably possible, even if we disagree about the mechanisms it can or can't use.
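To spell out the Halting Problem point: if a perfect halts() decider existed, the short construction below would contradict it, and nothing in the argument cares whether the decider is a machine or a human. Sketch only; halts() is precisely the thing that cannot exist.

# Classic diagonal argument, as Python pseudocode.
def halts(program, argument):
    # Hypothetical oracle: True iff program(argument) eventually halts.
    # No total, correct implementation of this can exist.
    raise NotImplementedError

def troublemaker(program):
    # Do the opposite of whatever halts() predicts about running
    # 'program' on its own source.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# troublemaker(troublemaker) halts if and only if it doesn't halt --
# a contradiction, so no such halts() can exist for anyone to compute.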
_________________________
Quality is addictive.
#270039 - 05/19/14 04:10 PM
Re: Threats on the horizon
[Re: Bingley]
Geezer in Chief
Geezer
Registered: 08/26/06
Posts: 7705
Loc: southern Cal
This thread is just too much. Can't we get back to a debate concerning the best steel for a survival knife/machete/whacking thingee? You know, the really important stuff....
_________________________
Geezer in Chief