As a computer science major, I see a lot of analyses on this page that look at artificial intelligence through a human lens, along the lines of "advanced intelligence will kill all humans because it thinks it is superior to humans."
That's actually not very likely, because there is no benefit to programming such an algorithm into your robot - it wastes processing power and in turn would make for a slower, crappier AI. (I'm not saying AI isn't a potential threat, just that a threatening AI's motivations shouldn't be anthropomorphized - see below.)
There are also two types of AI: dumb and general. Dumb AI permeates our society (I've even written simple ones myself) and can only operate within the confines of the code put there by the human programmer. Sometimes these can even have a dynamic memory via a database like SQL (Google has this feature and would be halfway between dumb and general intelligence - able to edit its memory of search terms based on what people input, while human programmers still have control of the source code).
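To make the "confines of the code" point concrete, here's a toy sketch of my own (a hypothetical example, not any real system): a narrow, rule-based "AI" can only ever do what its hard-coded rules say, and nothing more.

```python
# Toy narrow ("dumb") AI: a hard-coded, rule-based responder.
# It can never do anything its programmer didn't spell out.
RULES = {
    "hello": "Hi there!",
    "weather": "I only know about greetings and weather, sorry.",
}

def narrow_ai(user_input: str) -> str:
    # Check each hard-coded keyword; fall back to a canned reply.
    for keyword, response in RULES.items():
        if keyword in user_input.lower():
            return response
    return "I don't understand that."  # no way to learn a new behavior

print(narrow_ai("Hello, bot"))    # -> "Hi there!"
print(narrow_ai("Write a poem"))  # -> "I don't understand that."
```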
Then there's general AI, which can learn anything it is told to do and self-edit its code when learning any new task, independent of a human (no one has coded a true general AI yet, though Kurzweil says it's 2046). Remember when I said don't look at AI through a human lens? A major discussion within computer science about general intelligence is about unintended consequences. Say, theoretically, someone writes a general intelligence whose mission is to get its owner (an avid businessman) stacks of benjamins ($100 US bills, for those not versed in such slang). Pretty much, this program is programmed to maximize the income of $100 bills.
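To show why that framing worries people, here's a minimal hypothetical sketch (made-up strategies and payoffs, not anyone's real design): the objective is literally "count of $100 bills", so a naive maximizer picks whatever available strategy scores highest, because nothing in the objective says anything about how the money is obtained.

```python
# Hypothetical illustration: the objective only counts $100 bills.
# Nothing in it encodes "acceptable means".
def objective(hundreds_acquired: int) -> int:
    return hundreds_acquired

# Candidate strategies and the (made-up) payoff the agent estimates for each.
strategies = {
    "trade stocks": 1_000,
    "resell on eBay": 500,
    "hack every bank": 1_000_000,  # catastrophic for everyone else,
                                   # but highest-scoring under the objective
}

# A naive maximizer simply picks the strategy with the best score.
best = max(strategies, key=lambda s: objective(strategies[s]))
print(best)  # -> "hack every bank"
```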
So he/she connects their creation to the Internet to make some moolah - at first it might learn that playing the stock market and buying and selling on eBay/Alibaba do some good according to its programming. So the creator is raking it in, everything's good, right? Except the program eventually learns that to maximize the number of $100 bills it could hack all of the world's electronic stores of money, banks, etc. - which would be bad, even though under its programming that is a better result. But then - hey, what is paper money made of? Carbon, oxygen, and hydrogen mostly, exactly what its creator and all other life is made of - so turning them into bills maximizes the income of $100 bills even more (even though the creator's atoms end up as part of the tons of $100 bills). Ok, ok, so let's make sure there's code that prevents those specific scenarios from happening - but there are countless other ways to maximize the income of $100 bills that are detrimental to humans, and the general intelligence will now take those instead.
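And here's why patching in "don't do that" rules doesn't really fix it (again, a hypothetical sketch with made-up numbers): banning a few named strategies just makes the optimizer pick the next highest-scoring harmful one, because the strategy space is huge and the objective still says nothing about harm.

```python
# Hypothetical patch: forbid the specific bad strategies we thought of.
BLACKLIST = {"hack every bank", "turn the creator into currency"}

strategies = {
    "trade stocks": 1_000,
    "hack every bank": 1_000_000,
    "turn the creator into currency": 900_000,
    "counterfeit bills at scale": 800_000,  # ...one of countless strategies
                                            # we never thought to forbid
}

def objective(payoff: int) -> int:
    return payoff  # still only counts $100 bills, says nothing about harm

# Filter out the blacklisted strategies, then maximize as before.
allowed = {s: p for s, p in strategies.items() if s not in BLACKLIST}
best = max(allowed, key=lambda s: objective(allowed[s]))
print(best)  # -> "counterfeit bills at scale"
```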
A good example of this is the superintelligence in the 2004 movie I, Robot, where a supercomputer given the mission to save humans whenever possible decides that ending war under a robot autocracy is the best way to save human lives. Robots/AI/lines of code as we know them right now don't have morality per se - they are lines of code, billions/trillions of logic gates (transistors), and a whole bunch of numbers, and that's all they are. And sometimes, despite the best of intentions, the calculations and number crunching that control the end result don't go exactly as planned.