Google does a lot of stupid things. All giant corporations are the same in this respect. But it takes a special effort to do something truly terrible. That is where Google's Project Nimbus comes in.
Project Nimbus is a joint effort between Google, Amazon and the Israeli government that provides futuristic surveillance capabilities through the use of advanced machine learning models. Like it or not, this is part of the future of state security, and it’s no scarier than many other similar projects. Many of us even use similar technologies in and around our homes.
Where things get dark and ugly is what Google itself says about Project Nimbus' capabilities:
Nimbus training documents highlight the “face, facial landmark, emotion detection capabilities of the Google Cloud Vision API,” and in a Nimbus training webinar, a Google engineer confirmed for an Israeli client that it would be possible to “process data through Nimbus to determine if someone is lying”.
Yes, the company that gave us YouTube's awesomely bad algorithms now wants to sell algorithms that tell the police whether someone is lying. This is a field Microsoft has abandoned because of its inherent problems.
Unfortunately, Google disagrees strongly enough that it has retaliated against people inside the company who speak out against the project.
I won’t go too deep into the politics at play here, but the whole project is designed so that the Israeli government can hide what it’s doing. According to Jack Poulson, the former head of Google Enterprise Security, one of the main goals of Project Nimbus is “preventing the German government from requesting data relating to the Israel Defense Forces for the International Criminal Court,” as reported by The Intercept. (Under some interpretations of international law, Israel stands accused of committing crimes against humanity against Palestinians.)
What you think about the Israel-Palestine conflict, though, really doesn’t matter. There is no good reason to provide this kind of technology to any government, of any size. Doing so is what makes Google evil here.
The supposed capabilities of Nimbus are frightening even if the Google Cloud Vision API were 100% correct, 100% of the time. Imagine police body cameras that use AI to help decide whether you should be charged and arrested. Then consider how often machine learning systems get things wrong, and it all becomes horrifying.
This isn’t just Google’s problem. Look at content moderation on YouTube, Facebook, or Twitter: 90% of the initial work is done by computers running moderation algorithms that make the wrong decision far too often. But Project Nimbus would do more than just delete your mean comment; it could cost you your life.
No company should be offering this kind of AI until the technology matures to the point where it never makes mistakes, and that point will never come.
Look, I’m all for finding the bad guys and doing something about them, like most people. I understand that law enforcement, whether the local police department or the Israel Defense Forces, is a necessary evil. Using AI to do it is an unnecessary evil.
I’m not saying Google should just stick to writing the software that powers the phones you love and never branch out. I’m just saying there’s a right way and a wrong way; Google chose the wrong way here, and now it’s locked in, because the terms of the contract don’t allow Google to stop participating.
You should make up your own mind, and never listen to someone on the internet with a soapbox. But you should also be well informed when a company founded on the principle of “Don’t be evil” comes full circle and becomes the evil it warned us about.