Google Cloud CEO Diane Greene has announced that the company will not renew its contract with the Pentagon’s Project Maven, a program that uses artificial intelligence (AI) to analyze military drone footage and improve object recognition. The project was launched in April 2017 and has drawn heavy backlash from employees. Although it was supposed to be restricted to narrowly defined uses, the potential for surveilling entire cities at home and abroad caused a great deal of concern. Employees petitioned the company, and some even resigned from Google over the ethical concerns the project raised. About 3,100 employees reportedly signed the petition presented to CEO Sundar Pichai. Although Google insists the project is narrower in scope than people seem to think, the prospect of being able to click anywhere on a map and see everything associated with that area in near real time raises serious ethical and privacy concerns.
Greene says the collaboration will end in March 2019, when the contract is set to expire. Google is also set to update its code of ethics within the next week. Perhaps this will set a precedent of less (if any) military collaboration in the future. If such Pentagon collaborations were to occur again, they might be limited to less “invasive” purposes, but we won’t know for sure. Much of the technology used in Project Maven is open source anyway; Google was mainly providing technical support and expertise around its TensorFlow application programming interface (API). However, the controversy of direct involvement in an overtly military project by one of the largest companies in the world left many employees uncertain about the end uses of their work.
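To make concrete what “providing TensorFlow expertise” can look like, here is a minimal, purely illustrative sketch of the kind of open-source image-recognition workflow TensorFlow enables: classifying a single image frame with a pretrained network. This is not Project Maven’s code; the choice of the MobileNetV2 model and the file name frame.jpg are assumptions made for the example.

```python
# Illustrative sketch only -- NOT Project Maven code. Shows how the
# open-source TensorFlow stack can label objects in a single image.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

# Load a publicly available classifier pretrained on ImageNet.
model = MobileNetV2(weights="imagenet")

# "frame.jpg" is a hypothetical video frame; resize it to the
# model's expected 224x224 input and apply the standard preprocessing.
img = tf.keras.preprocessing.image.load_img(
    "frame.jpg", target_size=(224, 224))
x = preprocess_input(
    np.expand_dims(tf.keras.preprocessing.image.img_to_array(img), axis=0))

# Run inference and print the top-3 predicted labels with confidences.
for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
    print(f"{label}: {score:.3f}")
```

The same freely available tooling scales from a toy example like this to analyzing hours of video, which is exactly why its dual-use potential troubled employees.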
This brings up broader conversations about machine learning (ML) and AI and the ethical and moral responsibilities that come with their use. As society becomes less and less private, we have almost no choice but to trust these companies with our data. Although we tend to hand it over consensually (read your terms of service, folks), any breach of our privacy, or any use of our data without first notifying us, is unethical at its core. Mistakes, hacks, and fraud can happen (see the Facebook and Cambridge Analytica affair), and ML and AI will only open the door to make that worse. Then again, they could also make it better. Those who hold that technology and ability in their hands get to decide for the rest of us. So maybe don’t ditch your touchscreen for a flip phone just yet, but be aware of the potential uses, good and bad, when your data falls into unknown hands.