Q&A: Using Technology to Advance, Not Suppress, Human Dignity and Equality

Jan. 25, 2017
By B. Rose Kelly
Source: Woodrow Wilson School

This Q&A is part of a series featuring panelists who will participate in the Princeton-Fung Global Forum. This public event, to be held March 20-21 in Berlin, is being organized by the Woodrow Wilson School of Public and International Affairs.

In an age where technology is often used to harm, can it still be harnessed to advance human dignity and equality?

This is the mission of the Center for Democracy & Technology (CDT), a global nonprofit committed to advancing digital rights. Led by President and CEO Nuala O’Connor, the center works to ensure that the human rights we enjoy in the physical world carry over into digital spheres.

In this Q&A, O’Connor describes how technology can be designed to protect human rights in the digital world. She will be a panelist at the upcoming Princeton-Fung Global Forum, “Society 3.0+: Can Liberty Survive the Digital Age?” in the session “Panel 4: Communication Silos and Information Overload.”

Q. Has the internet created communication silos or echo chambers? If so, how? While the internet has certainly connected us, has it divided us, too?

O’Connor: It is clear now, more than ever, that the internet has created echo chambers, even as it transforms what it means to consume media and news. As platforms continue to deploy algorithms to curate and personalize content for their users – based on their behaviors and patterns – we will have to think through what both editorial oversight and journalistic ethics mean in the digital world. How can we encode the ethics and norms of the best of journalism into the algorithm itself if there are no human editors to sort through so-called “fake news” and satire? How do we ensure a digitally literate community, aware of variations in editorial oversight and content creation? And what can or should platform creators do to foster an informed populace, break down “internet bubbles” or echo chambers and encourage dialogue? Do we need new norms to help users engage thoughtfully, even empathetically, with those of differing viewpoints? How do we think more critically about online communities as spaces for engagement and learning while trying to combat the pervasiveness of harassment?

Q. Your work at the Center for Democracy & Technology is driving policies that “advance the rights of the individual in the digital age.” How does technology threaten human rights? And what is your organization doing to protect those rights and increase equality?

O’Connor: Technology, like almost any tool, has the potential to help or to harm. At CDT, we are committed to a vision of technology advancing democratic ideals – particularly human dignity and equality – and to embedding these values into emerging technologies. We think constantly about how technology’s design and deployment can protect human rights and battle inequality. We are committed to making sure that the rights we have in the physical world – for example, freedom of expression and the right to privacy – translate into the digital world: the right to post content online, the right to private spaces not surveilled by the government, and the right to control one’s personal data as an extension of one’s self. Our project teams, through their writing, advocacy and convening, work to inform and influence both government and industry policy so that it reflects these democratic values in our digital world. One of our most exciting projects is our work on digital decisions, which aims to ensure fairness in algorithmic decision-making by rooting out unintended biases in the design. If unchecked, these biases can reproduce the prejudice and discrimination found in the physical world. We are finalizing a roadmap for developers so that, at every step in the process of creating an algorithm, they can consciously look for and test for bias.
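CDT’s roadmap itself is not reproduced here, but a minimal sketch can illustrate the kind of check a developer might run. The hypothetical example below computes the rate of positive decisions a model makes for each demographic group and flags the gap between groups if it exceeds a chosen tolerance (a simple demographic-parity test). The function names, sample data and threshold are assumptions for illustration, not CDT’s actual tooling.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions (e.g., approvals) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: flag the model if approval rates diverge too much.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]                    # model outputs
groups    = ["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"]  # group labels
GAP_THRESHOLD = 0.2  # assumed tolerance; a real review would justify this

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > GAP_THRESHOLD:
    print("warning: decisions may be biased against one group")
```

A check like this is only one step in such a review; a fuller audit would also look at error rates per group, the provenance of the training data and the downstream effects of each decision.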

Q. You served as the first chief privacy officer for the U.S. Department of Homeland Security (DHS). Can you tell us more about that position and how it influenced your views on security and privacy? Can you also give us some examples of policies that were enacted while you were in the position?

O’Connor: I believe that protecting the privacy and dignity of individuals is a mission for government, just as protecting citizens from physical harm is. I was privileged to serve under the first DHS Secretary, Tom Ridge, who believed that protecting the rights of citizens was part of the mission and duty of the department. As the first CPO, I had the opportunity to create a multidisciplinary office of lawyers, policy experts, compliance leaders, technologists and personnel with a variety of experiences and expertise, drawn from both within and outside government service. I believe their disparate viewpoints made for a stronger office and a stronger department, one that we both supported and, on more than one occasion, criticized. That model of a functioning, operational and yet “ombuds-like” role has been replicated and embedded in other agencies, leading to a growth in processes around privacy impact assessments, the embedding of privacy values in acquisition programs and more. I’m particularly proud of our work in making privacy part of systematic reviews of new programs and products, and of our work advancing the rights of non-U.S. persons under Privacy Act systems of records. Even more, I believe the office advanced awareness of privacy as a value that is not in conflict with national security, but rather a global, universal value that is necessary to individual freedom.

Q. With regards to human rights, what is the biggest threat technology poses? And what is its greatest strength?

O’Connor: I worry that, without proper vigilance, we will slowly enter an era where there is no zone of privacy, and with that loss of privacy will come a loss of freedom of thought, association and expression. I also think there are soft costs to weakening privacy or increasing surveillance: a loss to human creativity and experimentation – in thought, in communication, in whatever form your creative, personal life takes. That is why I feel so strongly about encryption. Not only is end-to-end encryption demonstrably necessary to the secure functioning of systems, infrastructure and, indeed, government itself, it is also essential to human dignity and serendipitous discovery.

Another concern I have is the assumption that technology is infallible. Because people design technology in their own context, designs often reflect the creator’s reality and the power hierarchies familiar to them. As more and more of our lives are lived online, the opportunity for traditional power structures to be reinforced – or, worse, for authoritarian regimes to leverage technology against their citizens – becomes magnified. At the same time, however, technology’s greatest strength is its democratizing force. We have yet to realize the full potential of the internet to serve everyone, but we’re getting there; more than three billion people are connected to the internet, with access to information, community and business opportunities that would otherwise be impossible.

Q. What draws you to the connection between technology and human rights? Why have you spent your career committed to technology policy, privacy and information?

O’Connor: I believe in the power of internet-enabled technologies to be the greatest amplifier of individual voice that history has seen. I believe in the democratizing power of technology, and yet I also see the risks of unthinking deployment. We are still at a very early period in the digital age, and the decisions we make from a policy and design standpoint will affect individuals, communities and countries for generations to come. It is exciting and daunting to be part of these dialogues – about the boundaries of self and company, self and community, and self and state – during this time of transformational change. We’re setting the course for a new epoch, one that I hope will be rich in opportunity for all persons, regardless of status or origin.