These days, employees at major tech companies like Facebook, Amazon, and Google all have something to complain about. Whether it’s data security, wages, or gender equality, backlash inside these offices has fed media outlets consistent headlines for months. Well, Google’s at it again.
With its open-source artificial intelligence (AI) tool, TensorFlow, Google’s positioned itself to be a leading machine learning and AI developer. But employees of the tech titan are concerned about the Department of Defense’s (DoD) use of the tool. It has sparked a debate that will surely shape how tech companies approach working with government military programs.
Don’t Be Evil
More than 3,000 employees signed a letter calling for an end to Google’s involvement in a DoD drone surveillance program referred to as “Project Maven.” The AI-powered project detects vehicles and other objects in video footage produced by military drones. The letter, addressed to CEO Sundar Pichai, argues that the company’s involvement goes against its code of conduct motto, “Don’t Be Evil.”
The signatories ask Google to adopt a policy stating that neither the company nor its contractors will ever build warfare technology.
A clause from the letter vehemently pushes for an anti-war stance: “This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US government in military surveillance — and potentially lethal outcomes — is not acceptable.”
Back and Forth
Google isn’t moving fast to appease its employees, however. The Mountain View-based developer countered the letter with a statement of its own, saying the company’s involvement is only for “non-offensive purposes” and that its technology will be used “to flag images for human review and is intended to save lives and save people from having to do highly tedious work.”
The company did concede that “any military use of machine learning naturally raises valid concerns” and assured its employees that it is “actively engaged [company-wide] in a comprehensive discussion of this important topic.” It offered another reassurance as well: Google-supplied technology would not “operate or fly drones” and “will not be used to launch weapons.”
The employees countered Google’s reply with yet another letter, saying, “While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it’s delivered it could easily be used to assist in these tasks.”
Project Maven began in April 2017, and it’s not entirely clear what Google is providing the DoD beyond access to its TensorFlow technology, along with the explicit acknowledgment that Maven is a Pentagon research project aimed at improving the analysis of drone footage.
As with most moral and ethical dilemmas, everyone has their own opinion of Google’s involvement with the military. But the employees’ letter did raise a valid concern: Google stands out because its products reach billions of people daily. If Google’s own employees can’t hold their employer accountable, how can tech companies be held accountable at all?