Public debate is heating up over the future development of autonomous weapon systems.1 Some critics, often invoking science-fiction imagery, portray that future as a stark choice between a world in which those systems are banned outright and a world of legal void and ethical collapse on the battlefield.2 Yet an outright ban on autonomous weapon systems, even if it could be made effective, trades whatever risks such systems might pose in war for the real, if less visible, risk of failing to develop forms of automation that could make the use of force more precise and less harmful to civilians caught near it. Grounded in a more realistic assessment of the technology—acknowledging what is known and what is yet unknown—as well as the interests of the many international and domestic actors involved, this paper outlines a practical alternative: the gradual evolution of codes of conduct based on traditional legal and ethical principles governing weapons and warfare.
Last month, philosopher Patrick Lin delivered this briefing about the ethics of drones at an event hosted by In-Q-Tel, the CIA’s venture-capital arm. It’s a thorough and unnerving survey of what it might mean for the intelligence service to deploy different kinds of robots.