If Asimov's three rules don't work, there is another way.
Here are my three rules for preventing a robot apocalypse, or at least the kind of disaster depicted in the I, Robot movie.
No freedom of movement.
No direct connection to the internet or other AIs.
Hard-wired failsafes and manual backup systems.
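The third rule can at least be sketched in software terms. The snippet below is a hypothetical dead-man's-switch pattern: unless a human operator keeps re-arming the switch, the machine stops. All the names here (`DeadMansSwitch`, `run_robot`, the timeout value) are my own invention for illustration, and a real failsafe would of course cut power in hardware rather than rely on code.

```python
import threading
import time

class DeadMansSwitch:
    """Software analogue of a hard-wired failsafe: the controlled process
    may only keep running while a human operator keeps re-arming the switch.
    (A genuine failsafe would interrupt power in hardware; this is a sketch.)"""

    def __init__(self, timeout_s: float):
        self.timeout_s = timeout_s
        self._last_arm = time.monotonic()
        self._lock = threading.Lock()

    def arm(self) -> None:
        # Called by the operator's manual backup control (rule three).
        with self._lock:
            self._last_arm = time.monotonic()

    def is_tripped(self) -> bool:
        # True once the operator has been silent longer than the timeout.
        with self._lock:
            return (time.monotonic() - self._last_arm) > self.timeout_s


def run_robot(switch: DeadMansSwitch, steps: int) -> int:
    """Run up to `steps` work cycles, halting the moment the switch trips."""
    completed = 0
    for _ in range(steps):
        if switch.is_tripped():
            break  # failsafe engaged: stop all actuation immediately
        completed += 1  # placeholder for one cycle of robot work
    return completed
```

The design point is that the default state is "stopped": the robot must continuously earn permission to act, rather than the human having to intervene to stop it.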
... it's a fascinating time to be alive... all we need is a shift from a monetary-based to a resource-based economy... because if not, then: "Additionally, progress has raised concerns that robots will take the place of human workers, and that humans will eventually be forced to live with machines that mimic human behaviors. There's also concern that criminals could take advantage of advancements in AI, using a 'speech synthesis system' to impersonate another human, for example, or mining smartphones to uncover personal information."
"Will 2001: A Space Odyssey's HAL 9000 become a reality soon? No, but scientists fear that technology is heading that way."
Human-level intelligence in AI is far closer than most people think, or want to accept. Given the exponential growth of most information technology, it isn't unreasonable to expect 'HAL'-level machines in the next 15 years.
Hmm, quite interesting. I'm sure, as others probably are, that many of the critical advancements in making robots with human-like intelligence have occurred in the private sector, out of the public eye.
I'm betting there is a lot of concern behind closed doors. Allow yourself to envision robots that can establish their own organizations, mine for raw materials, improve themselves, and so on. It's a sobering thought. Some action should be taken now to avoid choosing the wrong path in robotics development.