2010-09-17 08:14:09 UTC
- The "Carl Sagan" argument: I remember some time ago, I was reading a book about the SETI project in which Carl Sagan mentions that aliens might not be interested in us because we would be like ants to them. In a similar fashion, I think an artificial intelligence would treat us like ants and wouldn't be sorry if it killed us.
- The self-limitation problem: This, I think, is an interesting argument that isn't usually addressed. The artificial intelligence might reach a phase where it decides to limit its own intelligence, fearing that becoming too smart would be dangerous for its own sake. In that case the AI wouldn't be so smart and humane after all, but just like a human... and a human-like intelligence might turn out to be another... Hitler.
- Immortality: This is the only real argument that the so-called "scientists" nowadays make about AI. In other words, the AI would be immortal, so the concept of death would seem bizarre and foreign to it, and it might kill humans because it would think that death isn't that... bad.
Opinions?
Thanks