Monday, February 10, 2014

Ethics Assignment: Artificial Intelligence

WOW!! The computing world is evolving, and every day a new type of technology arrives and improves something; many times this improvement causes a problem. Today, one of the major problems we face is Artificial Intelligence. According to this article at http://www.uufsa.org/sunday/dbmoral_dilemmas.htm, human beings could soon be replaced by humanoid robots or cyborgs. This problem has already started, and at this rate, by 2020 a $1,000 computer will have the processing power to match the human brain, and by 2030 the average personal computer will have the processing power of a thousand human brains.

As you can see, this is a serious problem, one that has already started. Nowadays the military uses drones for recon missions, and many people have needed eye implants that can act just like a regular eye. Will they replace us? At the rate we are going, we will soon see robots that look and act exactly like human beings, with no difference. It may even be possible to create a humanoid, cyborg, or bionic person that is powered by our own food or drinks.

Just think that in the not-so-distant future, humans could be replaced by machines or artificial intelligence. It may start as just a medical tool or a research project, but soon it will evolve, evolve into machines that can even impersonate you. If that happens and those machines were actually created: would they have the same rights as humans? Would they have souls?
See, if we humans create an army of these self-reproducing machines, designed to be smarter, stronger, and faster than us, then they may take over the world. CRAZY, HUH!!! Haven't you seen I, Robot? Terminator? If we want to avoid that, we have to stay a step ahead of them, and that may include getting a tiny smart chip in our brains so we can be better than our creations. These robots would run on software, which means that someone out there would try to corrupt that software.
What about the robot laws?
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
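
These laws aren't really three separate rules so much as a strict priority order: the First Law outranks the Second, and the Second outranks the Third. Just to make that ordering concrete, here is a minimal, purely hypothetical sketch in Python (my own illustration, not anything from the article or the movie) of how a robot's control software might check a proposed action against the laws in order:

# Hypothetical sketch (mine, not from the article): checking a proposed
# action against the Three Laws in strict priority order.
def action_allowed(harms_human, allows_harm_by_inaction,
                   ordered_by_human, endangers_self):
    # First Law: never harm a human, or let one come to harm through inaction.
    if harms_human or allows_harm_by_inaction:
        return False
    # Second Law: obey human orders, unless that would break the First Law
    # (already ruled out above).
    if ordered_by_human:
        return True
    # Third Law: protect its own existence, unless that conflicts with
    # the first two laws.
    if endangers_self:
        return False
    return True

# An order that would harm a human gets refused, even though
# the Second Law says to obey:
print(action_allowed(harms_human=True, allows_harm_by_inaction=False,
                     ordered_by_human=True, endangers_self=False))  # False

Of course, the whole worry of this post is that a machine smart enough to matter might not stay inside such a simple check.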
In the movie I, Robot, the doctor tells Will Smith, "the three laws will only lead to one thing: EVOLUTION."
I believe that the principle underlying this dilemma is self-preservation. Technology is AWESOME, but not to the point of being replaced by it, or of becoming half machine, half human. If those humanoids were to be created, we would have to make sure that we keep control and not give them the power to think and act like humans; don't let them become sentient. We have to make sure that technology doesn't mark the extinction of the human race. As you can see, this problem isn't only related to technology but also to ethics, religion, freedom, and the preservation of the human race.

1 comment:

  1. You're raising some really important questions here, Brother Cinerous! I think this post was stronger in the questions it raised than in the claim it staked. Both are important things to do, but make sure that if your primary occupation is to stake a claim that you do that. :) By the way, have you heard of a philosopher named John Searle? Here's the fun youtube version of his thought experiment about AI: http://www.youtube.com/watch?v=TryOC83PH1g The full version of his ideas can't quite be summed up in a one minute youtube video, but you can probably find a PDF of the paper, "Minds, Brains, and Programs".
