Nonsense. What will prevent it is sufficiently /moral/ robots.
Intelligence is not a sufficient condition for morality, and
perhaps not even a necessary one.
If one buys Rand's contention that normative philosophy (ethics,
politics) can be rationally derived from objective reality, then we
can assume that very intelligent robots will reason their way into
benevolence toward humans. I, for one, am not convinced of Rand's
claim in this regard, so I would want explicit moral codes
built into any intelligent technology, codes that could not be
overridden except by their human creators. If such intelligences could reason
their way toward better moral codes, they would still have to
convince us humans, with human reason, to build them.