In a few weeks, the Advanced Technology External Advisory Council (ATEAC) was scheduled to come together for its first meeting. At that meeting, we were expected to “stress test” a proposed face recognition technology policy. We were going to dedicate an entire day to it (at least a quarter of the total time they expected to get out of us). The people I talked to at Google seemed profoundly disturbed by what “face recognition” could do. It’s not the first time I’ve heard that kind of deep concern – I’ve also heard it in completely unrelated one-on-one settings from a very diverse set of academics whose only commonality was working at the interface of machine learning and human-computer interaction (HCI). It isn’t just face recognition. It’s body posture, the acoustics of speech and laughter, the way a pen is used on a tablet, and (famously) text. Privacy isn’t over, but it will never again be present in society without serious, deliberate, coordinated defense.
For Google (and therefore all of us, since we’re all affected by Google), part of that serious, deliberate, coordinated defense against the negative impacts of AI is its own internal policy. The tech giants in general realize that while governments have a critical role in determining and enforcing what’s legal, they have their own responsibilities, challenges, and affordances as immensely powerful transnational forces. As Brad Smith said at Aspen in 2017 (and probably in a lot of other places; I’m paraphrasing here): “The government is important and we respect that, but they have to recognize that we are the battlefield. The war is being fought on us.” In other words, tech has a responsibility to itself and to the rest of us to act immediately on the information it acquires. Tech must comply with the law when the law comes, but it can’t just sit and wait for the law to arrive.
What Google wanted from ATEAC was to “stress test” the policy they’d come up with internally. They said they chose their external advisory council on the basis of several factors:
- Knowing things that Google doesn’t know or do in-house.
- Diversity, sampling across broad spectrums. (The Googlers I knew said the company doesn’t believe in binaries, so no one was meant to represent a particular class.)
- Being extremely likely to be forceful, articulate, and critical, saying exactly what they thought regardless of implications, political correctness, etc.
- Yet also being the kind of people who could sit down at a table and listen, who cared enough about being right to update their positions when they learned new things, and who were sufficiently respectful and cordial that all voices would be heard.
This post was originally published by Joanna Bryson, a member of Google’s Advanced Technology External Advisory Council (ATEAC), on her blog, Adventures in NI. Read the rest of the post there.