A few days ago, London-based AI company DeepMind (an Alphabet subsidiary) and thousands of other leaders in artificial intelligence (AI) signed a pledge vowing never to develop any “lethal autonomous weapon.”
The group, which included Elon Musk, Skype founder Jaan Tallinn, and AI researchers Yoshua Bengio, Stuart Russell, and Jürgen Schmidhuber, wrote in the letter: “We should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.”
Leaving More Questions Than Answers
MIT physics professor Max Tegmark also signed the pledge. He believes the vow shows AI leaders around the world “shifting from talk to action,” and that it draws a clear line against developing AI for lethal military use.
Still, it’s difficult to say what is and isn’t a lethal autonomous weapon. Where do humans fit into the weapons system? How can entire countries be made to adhere to this pledge, rather than just a select group of companies and researchers? And who is in charge of regulating these systems, and of enforcing the pledge and punishing those who break it?
A Call to Action
The letter concluded with the following paragraph, though it remains unclear whether and when governments will get involved:
We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons. These currently being absent, we opt to hold ourselves to a high standard: we will neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons. We ask that technology companies and organizations, as well as leaders, policymakers, and other individuals, join us in this pledge.
Setting the Standard?
Military analyst Paul Scharre, an accomplished author on the future of AI and warfare, doubts the pledge will have much effect on international politics. “What seems to be lacking,” he explained, “is sustained engagement from AI researchers in explaining to policymakers why they are concerned about autonomous weapons.”
Scharre says most governments already agree with the pledge, and that “the real debate is in the middle space, which the press release is somewhat ambiguous on.”
AI technology is advancing rapidly, and it’s hard to imagine that every company will abide by this pledge. That said, we hope there is more to it than a statement of intent; intent followed by action would give us all some peace of mind.