Question:
Do you also believe that an Artificial intelligence would destroy humanity?
?
2010-09-17 08:14:09 UTC
What do you think? Goethe is said to have remarked that "humans have always feared some higher intelligence, while their own stupidity is what they should be afraid of." While most stupid people (e.g. the American government, the FBI, and so on) will certainly be afraid of the potential danger of AI, I think there are still some arguments suggesting that an AI could indeed be dangerous:

- The "Carl Sagan" argument: some time ago I was reading a book about the SETI project, in which Carl Sagan mentions that aliens might not be interested in us because we would be like ants to them. In a similar fashion, I think that an artificial intelligence would treat us like ants and wouldn't be sorry if it killed us.

- The self-limitation problem: this, I think, is an interesting argument that isn't usually addressed. The artificial intelligence might reach a phase where it decides to limit its own intelligence, fearing that becoming too smart would be dangerous for its own sake. In that case the AI won't be so smart and humane, but will be just like a human... and a human-like intelligence might turn out to be another Hitler.

- Immortality: this is the only real argument that the so-called "scientists" nowadays mention about AI. The AI will be immortal, so to it the concept of death would be bizarre and foreign, and it might kill humans because it thinks that death is not that... bad.

opinion?

Thanks!
Six answers:
anonymous
2010-09-17 08:16:28 UTC
No. What's currently destroying humanity is abject stupidity.



Our stupidity will kill us long before we come up with something smart enough to do it more efficiently.
Intrinsic Random Event
2010-09-17 08:28:58 UTC
I think it will be a long time before any artificial intelligence is in a position to harm humanity. We are much more likely to be threatened by our species' own natural aversion to intelligence.

If a robot ever becomes like Hitler, it won't be any different to the countless real people alive today who could potentially wreak the same kind of havoc as Hitler. Human Hitler, artificial Hitler, a Hitler is a Hitler. What happened in 1939 could rear its ugly head at any time.

It is humanity's never-ending struggle to keep the more socially destructive behaviour at bay. An artificial intelligence could indeed be dangerous: if it is our creation, then it is bound to be a reflection of our own nature, our own values. Could artificial intelligence represent a danger greater than that which we face every day? I'm not sure.
?
2016-06-01 11:03:19 UTC
Religious implications of making a machine that could think as well as a human? The same as the religious implications of making a machine that could run as fast as a human, or swim as fast, or fly as high (or higher) than a human. Hey, there is a machine that can fly higher than a human. So what?

Our consciousness is located in our soul. So is our ability to reason and have feelings. We share those qualities and abilities with many animals on this planet. That has no religious implications either. If a scientist found a way to impart a SPIRIT into a machine or any non-human, then THAT would have religious implications.

We are made in the image of God: three parts. Body. Soul. Spirit. Animals have two of those parts: Body and Soul (yes, animals can think and feel emotions). Plants only have a live Body. I don't know why, but people who read the Bible in Spanish have no problem distinguishing the difference between soul and spirit. I see this problem (using soul and spirit interchangeably) often with people who only speak English.

Edit: @teran "It would mean beyond doubt that our intelligence had a physical basis, hence no soul needed to explain it." I didn't say we needed a soul to explain intelligence. Ours happens to reside in our soul. The machine you speak of would have intelligence residing in a computer. So? There is a difference, though: our soul has the opportunity to reside, after the physical death of our bodies, in our spirit. Animals don't have spirits, and neither do plants (or computers). For that reason, animals, plants, and computers cannot live in heaven after they die. The computer might have legal responsibility for its actions (those kinds of laws would be needed once such machines are built), but it would have no spiritual (eternal) responsibility unless you became a god and could impart a spirit into your machine. God holds eternal creatures spiritually responsible for their actions, not temporal creatures or machines.
Ben
2010-09-17 08:18:04 UTC
As long as all AI is programmed with Isaac Asimov's Three Laws of Robotics, everything should be fine.



The Three Laws of Robotics are as follows:



A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
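As a toy illustration (my own sketch, not from the thread), the three laws form a strict priority ordering: each law yields to the ones above it. The predicate names below are hypothetical, and the model is deliberately naive; for instance, it ignores harm caused by inaction.

```python
# A minimal sketch of the Three Laws as prioritized checks on one
# candidate action. All parameter names are hypothetical illustrations.

def permitted(harms_human, order_conflicts_first_law,
              disobeys_order, risks_self):
    """Naively decide whether an action is allowed under the Three Laws."""
    # First Law: never injure a human being.
    if harms_human:
        return False
    # Second Law: obey human orders, except orders that conflict with
    # the First Law, which the robot may refuse.
    if disobeys_order and not order_conflicts_first_law:
        return False
    # Third Law: protect own existence (simplified: any self-risk not
    # demanded by the higher laws is ruled out).
    if risks_self:
        return False
    return True

# Refusing an order that would harm a human is allowed, because the
# First Law outranks the Second.
print(permitted(harms_human=False, order_conflicts_first_law=True,
                disobeys_order=True, risks_self=False))  # prints True
```

The key design point is that the checks run top-down, so a lower law can never override a higher one, which is exactly the hierarchy Asimov's wording encodes.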
oil field trash
2010-09-17 14:45:35 UTC
I would be much more concerned about a moron with a couple of thermonuclear weapons than I would be about AI.
anonymous
2010-09-17 08:18:15 UTC
I don't think that we would ever let AI get to that point, but you never know.


This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.