Stanford HAI 2019 Fall Conference – Artificially Intelligent Associations


– Good afternoon everybody. My name is Lucy Bernholz. I am the Director of Stanford’s
Digital Civil Society Lab. And it’s my honor to have been asked to organize this conversation. And we’ve got four
extraordinary people here to join us and all of you, because I really would like this to become more workshop-y than lecture-y. We’d asked for a different room setup, but this is what we got. So, what I wanted to
do was actually set up the problem as I see it. The opportunity that’s out
there and then each of our, I’ll introduce quickly the four speakers. But then engage all of
us in thinking about what actually is the problem, or problems that we’re
trying to solve for. What do we already have
underway as strategies, experiments, institutional innovations to address those problems. And what questions do
we have that researchers affiliated with HAI or other universities might be able to contribute to. So there’s sort of three parts to the two hours we have together. And I will say a couple of sentences about the way I see where we are. Each of our panelists will
speak for five or so minutes about their work and what they’re doing, and how they see the problem
and the potential opportunity. And then, we’ll invite you
to both ask some questions about anything you’ve
heard from any of us, offer up the strategies or experiments that you’re involved in
as we try to actually get a handle on all of the different kinds of activities that are underway. And then see if we can
collectively move toward those second two goals of the time, which is what are these different
experiments in structure have in common and what are
the researchable opportunities? So with that background,
let me just tell you the Digital Civil Society
Lab I started here at Stanford in 2014 with Rob Reich. And it’s motivated by a
pretty big fear to be honest. And that fear can be summed up as this. In a time when each of us individually is dependent on digital devices and global networks that
are either corporate created and/or government surveilled, is there actually anything
even remotely resembling an independent civic space,
in the digital world? And our sense of that is
actually that there isn’t. That there really isn’t this
critical important space that both history and
theory tell us are important for democracies to stand. A place where people can get
together of their own volition and take some kind of collective action. And in a digitally mediated world, again where the machinery is primarily commercially
prevailed, provided and government surveilled
that the space doesn’t exist. So the challenge then for
the Digital Civil Society Lab became well how do we recreate some kind of independent space
recognizing it won’t look anything like what we
might have assumed existed in the past and this is not an homage to some nostalgic sense of the past. That what needs to be created in an age when we’re
all digitally dependent is some new form of that
independent civic space. So there we have four hypotheses and this conversation fits
squarely into one of them. So the four hypotheses overall are that we actually need to
simultaneously make progress on creating alternative technologies, on creating new kinds of organizations, on creating new laws and on changing our own social norms about how we interact with digital technologies
and global networks. So those are the four areas of change, if you will, that’s domain of change. And this conversation
which was cheekily called Artificially Intelligent Associations, ’cause come on, how could
you pass up that name? That’s just a great name. Is squarely in the
bucket of thinking about the organizational forms
that we need to create, that we may need to
create that we’re tweaking or modifying already to actually manage and govern and hold and possibly destroy digital data, algorithms, insights on and behalf of some
kind of public benefit. So if you think about
in the American context, 140 years ago now, the not for profit organization, which most of us in the U.S.
are fairly familiar with, as a corporate form codified in law, with a couple of distinct characteristics that distinguish it from
a commercial corporation, was created to manage the contribution of private financial
resources for public benefit. Right? That’s what non-profits
do at the most mechanistic instrumental level I could
possibly describe them. And in order for that to work, there’s a set of actual
legal structures around them. There’s a set of corporate
requirements within them. And then there’s a normative
understanding of them. And the challenge I think
we face collectively now when the resource is digital data, or digitized algorithms, or the insights that derive
from them is to figure out what’s the mechanism that we’re
gonna use to move forward. Because to simply assume that
a not-for-profit corporation that was designed to do that
with financial resources can just automatically serve that purpose for digital resources is not gonna work. We’re seeing it all over the place. It doesn’t work now,
it’s not going to work. We need some form of,
perhaps it’s a mere tweak. Perhaps it’s a simple
change to the corporate code that governs non-profits. Or perhaps it’s an entirely
different kind of institution, and I use that very loosely. When I’m thinking of institution, I’m not necessarily thinking
of the building we just left. (laughing) Or the campus we’re meeting on. It might look very, very
different than that. But that’s what we’ve built to date. And we’re building other things now. So that’s the ark of the conversation. I should say that that is only half of from the labs perspective, that’s only half of the questions we have about the intersections
between artificial intelligence data global networks
and associational forms. And I’m happy to talk a lot about the other half of those questions. But for the purpose of this conversation, we wanted to focus it on
what kind of institutions are we and will be creating
to serve this purpose in our democracies in the digital age. So the four speakers we have, I’ll introduce them very briefly, bring a variety of perspectives to this. And they’re each going to talk about both what it is they’re doing that
fits into this conversation, what the challenges are
that they’re facing, how they think about what
form they’ve adapting or tweaking and then together
we’ll try to identify some of those common elements in some of the research questions. So starting at my right here. Eileen Donahoe is the Director of the Global Digital Policy Incubator and a well globally recognized
human rights expert. The Global Digital Policy Incubator is a sister institution
here on the Stanford campus to the Digital Civil Society Lab. And I’m always delighted to get a chance to work with Eileen. To Eileen’s right is Sean McDonald, who is the leader of an organization called Digital Public.io
and a widely published thinker on the governance of
data and data supply chains. To Sean’s right is Jasmine McNealy, a professor at the University of Florida. A fellow at the Berkman Klein Center and a fellow also of the
Digital Civil Society Lab, as is Sean, I should say that. And then at the far end of the table, Terah Lyons is the Executive Director of the Partnership for AI, which is a San Francisco
based global multi stakeholder partnership working on
artificial intelligence. So I’m gonna ask Eileen to kick us off with some framing about this question of associational form
artificial intelligence civil society in human rights, Eileen. – Great, so first off I’m
gonna try to talk not too fast because there’s a lot to be said on this incredibly rich topic. Which is why I apologize
for bringing slides, but I think it’ll help. First off, the institutional
form I work with is the existing international
human rights framework, which I see as simultaneously out of sync with certain aspects of
the digital global realm. As well as very well-suited to it. I wanna say that in
the human rights space, a lot of work has already
been done by people focused on freedom of expression, privacy, emerging work on democratic participation, the right to democratic participation with the disinformation threat. Very little has been done. She is on the cutting edge of defining what it looks like to protect independent civic space
in a digital world. So, I think that’s important context. I’m gonna start by talking
about what’s out of sync. – Okay. – And what’s out of
sync and then what fits. So this first slide is, you know, it’s a little complicated but
I sort of see four features of the globalized digitized space that make it inherently
challenging to work with the international
human rights framework. The first of which is the
basic trans-border mode of the Internets operation is disruptive on it’s face to a international order based on the concept of nation states. Sovereign nation states
with the obligation to protect citizens rights, defined by territorial boundaries. Instantaneous extra territorial reach is the default rather than the exception. Governments have the primary
obligation to keep citizens safe and to protect their liberty. They are really struggling
with how to do this in our trans-border global realm. So that’s the first thing. The second thing I will
mention is digitization itself. Digitization of everything
IOT, digitization of society. Again on the security side,
digitization is created all kinds of systemic
society wide vulnerabilities. Governments don’t know what to do about and we’re really flat
footed in that regard. But also in the liberty front, digitization have tremendous implications for obviously privacy which in turn has gigantic consequences for liberty. The simple idea being
if everything you say and do is tracked and monitored it will squelch where you feel free to go, what you say, who you meet with. And that goes to the heart of do we, can we have independent civic space. And I will also just
say, it turns out privacy is so much more important
to democratic society in the enjoyment of human rights, than private sector tech
companies have recognized. And that’s kind of, I think
there’s some are catching on. I think there’s some new potential there. New business models based on
privacy, trust and all that. But we’ll see. Third big disruptive feature is this trend toward privatization of governance. You all have heard this idea
of private sector platforms functioning as quasi-sovereigns. On the security side they
play a much bigger role in protecting peoples data and housing it. They own, operate and secure the critical civilian
internet infrastructure. So, they’re playing a out sized role, compared to what was envisioned in the original human rights framework. On the freedom side of the equation, these mega digital platforms also have a gigantic role as
quasi-sovereigns dictating the parameters of freedom of
expression through algorithms, terms of service community
guidelines, et cetera. As well I will note,
governments are increasingly turning to the private sector to take over judicial function. They are outsourcing the responsibility for assessing criminality and
they are even overreaching beyond that they’re basically
outsourcing censorship by asking companies to do
what democratic governments themselves could not do. So that’s an interesting trend. And then the fourth feature
that is obviously relevant to algorithmically driven societies is the basic move towards
governance by machine, where machine decisions
are being relied upon in many realms of our lives
that impact citizens rights, education, health, policing,
sentencing, parole. All kinds of things. But governance actors in the public sector often lack an understanding of the basis of those algorithmic decisions. And they lack any ability
to scrutinize them. So that means loss of transparency and a basic loss of
democratic accountability and any prospect for remedy, which are essential elements
of the human rights framework. Okay. That’s the out of sync part. I don’t say that in every audience. But I do in an audience concerned about protecting civic space. Because I don’t wanna undermine the international human rights framework, which I think is very important
for geopolitical reasons. So what are the features
of the existing framework that are peculiarly well-suited to governments in the digital realm? And here I wanna highlight
four things as well. The first of which is the starting premise of the inherent dignity
of the human person, and the centrality of the
human person in governance. I think that is a really
essential first step to human centered governance. Oh sorry, I brought the wrong one. Well let me look up here. I brought the wrong slide, oh well. Second feature. Let me see if I can pull this up by, well one is global legitimacy. I don’t have the slide here. Another feature is the global legitimacy that has been established
through multi lateral and multi stakeholder negotiation. It happened in a context of
the crisis of World War II. The human rights framework’s been embedded into national constitutions
legislation around the world. It is like a shared lingua franca. It’s a shared global language. And I personally think it’s
gonna be almost impossible to replicate that level of
recognition in any new framework. And I think we give that
up at our own peril. So I think that’s really important. But there are a variety of features, global legitimacy, negotiation, human person at the center. Oh, broad spectrum of both
procedural and substantive rights that speak directly to many
of the existing concerns that people have about the
AI implications on society. Right to privacy speaks to digitization of everything, mass surveillance. Freedom of expression speaks
to algorithmic curation of information and how
it effects what we see. The rights to equal protection and non-discrimination
goes right to the heart of bias and discrimination
being bedded into data. And then also the scrutability challenges related to fair process. Democratic accountability. The rule of law and the right to remedy. Those are concepts all embedded. And so I think that’s a
really substantial starting place for analysis and the combination of being globally recognized and having that framework is really important. The last feature is that the
existing framework speaks well, not just to the normative principals, but to the governance framework. So the rights in here in the human person, the governments have
the primary obligation to protect the rights in society, both on the liberty and security side. And in a relatively new development in the human rights
framework private sector has the responsibility
to respect human rights, to do due diligence processes to make sure that the effects of their products and services is not harmful to human rights. And then if they find those
harms, to remedy them. That’s the UN Guiding Principles on Business and Human Rights. And that combination of obligations, responsibilities does
a lot of work for us. Much more, I would say,
than all these new ethical principals which are great, and they bring AI specific insights. But they don’t really necessarily spell out roles and responsibilities. The last thing, I mean I could say so much more about governance innovation that has already happened. I’m gonna make a simple point here. The disruption of
society that has happened in the fourth industrial revolution, software eating the world, the whole thing has actually brought some
interesting governance innovation pre AI, or
pre thinking about AI. I’ll put it that way. In the original global
internet governance space, there was this move to global
multi stakeholder governance because there was a recognition that the technology
community was essential and that governments would be clueless. And the basic idea there was to create an open interoperable internet. And it was really largely
about the technical standards and these software protocols. But it was about
governance of the internet at the hardware layers, at
the architectural layer. Then with sort of the rise of social media and the information realm,
we went to governance on the internet where
there was a recognition that we needed to be governing what happens on the internet. I will mention at the UN
Human Rights Council 2012, June 1st UN resolution
on internet freedom, simply laying down this
idea that human rights have to be protected online as offline. And that was a really simple but big move and kind of foundational
to this whole idea. With digitization and everything, we’ve moved into sort of
thinking about how to protect privacy, data regulation,
but it’s obviously so much more needs to be done there. It’s so foundational. But at least people have
their heads around it. And I would say, the
simple idea I have here is that we need the same
move in the AI realm, as we had in the global
internet governance realm. Which is simply, human
rights have to be applied and protected and used
as a basis of analysis of AI driven societies. It’s the same thing. And that’s kind of my bottom line. Last thing I will quickly mention and we can talk about these later are these are just some little smattering of places where I see
leadership in this space. So on the government side, the Australian Human Rights Commissioner is doing some of the best
cutting edge analysis of how to bring a human rights lens to government procurement and
government reliance on AI. World Economic Forum and the business and social responsibility
did a really good white paper on responsible use of technology that brings together
all of the new insights from these ethical principals
that have popped up, like something like 260
different sets of principals and tries to marry it and merge
it with human rights frame. And I think that’s a really
constructive approach. OHCHR, the Office of the High
Commissioner of Human Rights is doing a B-tech initiative. And the idea is to
elaborate more extensively on how to make the existing
UN guiding principals on business and human rights more relevant and speak more directly to the AI realm. And then several private sector companies are leading as well in terms of bringing a human rights lens to what they do. And here, I’d say the
move toward human rights impact assessments by the private sector in the AI realm is really important. That’s all. – Thank you, Eileen. – So I’m interested in studying failure or actually failures. Failures of government. Failures of civil society organizations. Failures of these institutions or systems through which we as people in society are supposed to function. Failures of the contracts
we have between government, between the civil society organizations and the people they’re
supposed to be serving. And so when I think about human rights and particularly the
failure of human rights, with respect to technology
and governance of data, I think of failure. Because for one thing,
since 1945 in the Charter and then 48 in the Declaration, I think about what had to
advance for us to even think about human rights that
would be in any way inclusive of all of us even sitting in this room. I also think about lack of enforcement. I think of selectivity of enforcement. I think of how we define
human and humanity and who that excludes, or
who that has traditionally excluded or ignored or neglected. And I think of the lack of sovereignty that certain communities
have over their data, over who’s using data about
them, or them in the data. And so when I think about the forums that I’m interesting in, I’m looking at what I’m calling
community as technology. And that is, if we define technology as the use of scientific
knowledge for practical purposes, then there are communities who said, we have historically faced
these failures of government, failures of civil society, failures in the societal institutions, so we are going to form mutual
aid societies, so to speak. We are going to form
grass roots organizations, or just people themselves
who have committed themselves to being infrastructure. So we think of people as infrastructure. Simone’s a good reading for
any of you students who care. (laughing) So we think of people as infrastructure. So a couple of examples
of that in Detroit. Detroit historically has
been panned as a failed democratic enclave full of black people, full of poor people,
full of ghetto people, full of whatever you wanna call it. But if we’re being honest, Detroit has been neglected, right? And the people have been forgotten. But people haven’t forgotten themselves. And so there is a lot of
technological innovation coming out of Detroit from
these community members, teaching each other how to
use their technology safely. Teaching each other about the technologies that are being deployed on them. And teaching each other the skills and sharing the experiences that they need to help each other out. One form of this is the DAC, or the decentralized autonomous community. So people are forming these communities, they’re planning them out and see what can be done with black chain. Can we create a token that we can share, that we can trade, that
we can use for ourselves, in spite of all the crap that
is deployed on us, right? Another kind of mutual
aid, mutual assistance, mutual obligation kind
of form or function, I’m thinking of Los Angeles. There’s various
communities in Los Angeles, various organizations. I’m thinking specifically of
Stop LAPD Spying for example. Which has been very, very instrumental in going after police and government and corporate use of these
spy or emergent technologies that are basically performing the function of spying on
citizens and non-citizens as well. I also think of Oakland. We can see in the past year the outcomes of a concerted campaign to get the government in Oakland to ban facial recognition technology. So we have these mutual aid,
mutual assistance organizations that are existing, that are emerging, that are sustaining themselves. With regard to, what are we
gonna do with technology? How are we going to govern ourselves? How do we get to a place
to attempt to remedy the continued failures of
these massive institutions we have in society. Now what needs to happen though, is with every movement for liberation, with every movement for change, there has to be law,
there has to be regulation to accompany it and that regulation, that law has to be enforced. Bottom line, right? So, I have under the First
Amendment to the United States, I have the right to freedom
of speech, supposedly, right? But that has to be enforced. And sometimes it has to
be enforced by me suing a member of government or
a government institution. But there has to be
some kind of enforcement to make sure that my rights,
my privileges are respected. For this space that we’re in, there has to be comprehensive legislation. And not just legislation
aimed at technology, it has be legislation aimed at like, fulfilling those
disparities that we’ve had in the society for such a long time, that we know we need to fix. Gender equity. Equality. Also privacy and data governance
comprehensive legislation. But we have those things that
our tech is just amplifying. And so legislation is necessary. But I’m heartened because
people always fill in the gap where there is failure. But people shouldn’t have to, and people cannot be the infrastructure by which all failure is remedied. They are remedies. We are remedies. But we can’t remedy the entire sickness. And so, that’s what I study. – Fantastic, thank you. Terah Lyons, from the Partnership on AI is going to talk next. – Thank you. Thanks for organizing this, Lucy. And I’m excited that you’re
all in the room with us today. I’m gonna talk for most of time about the partnership itself. It’s a really interesting
sort of case study in institutional models associated with what we are talking about today. So I’m gonna do as much a deep dive in five minutes as I can. And I’m happy to answer
more questions later. So we are, as Lucy briefly
mentioned previously, we are a global
multi-stakeholder organization. We’re a 501(c)(3) non-profit, based here-ish in San Francisco. But work with 100 organizations globally. Ranging from industries to civil society and advocacy organizations
and academic institutions. And we were originally founded by a group of senior AI research scientists, at some of the largest
technology companies working in the AI space right now. They included Amazon, Apple, Facebook, IBM, Google and Microsoft. And as founding members of our board, which is constructed in
a really interesting way, we also had them joined by a
six non-profit organizations, or institutional representatives. So the ACLU sits on our board, alongside those six companies. MacArthur Foundation, policy languages at Oakland based racial based
racial equity organization and several other
academic representatives. And in the nature of the organization is such that our goals are
really to attempt to bridge theory and practice in developing a body of evidence that is meant to inform the responsible development and deployment of AI technologies. And the audience for
that work is principally those institutions that
hold the greatest power in achieving impact at scale right now which are AI developers essentially. So commercial technology companies and industrial research laboratories that are buying large investing the most in developing these technologies and having the biggest impact
right now on the ecosystem. So we’re really focused
at the practice level and really hoping to try to generate the type of research and best practices, upon which organizations,
like technology companies and others really can start to shift their behavior in meaningful ways. And also, eventually to inform a body of smart policy making regulation and law that is meant to support
the long term accountability structures that are necessary, in order to really have
effective technology governance. So that’s sort of the high level overview. We do that in a variety of different ways. We have a research organization. I should also mention that
we’re a holy independent entity. So, excuse me, sorry I’m
recovering from a cold. We function completely independently from the founding
companies that I mentioned. So even though they are
represented on our board, we have several governance
mechanisms in place, such that our decision
making as an institution is air gapped from their influence. So we’ve thought pretty
carefully about that, and I’m happy to talk more
about that later as well. And so, we make decisions
essentially as a staff and secretary working with
the hundred institutions that are in our network. And the research that we conduct is really in many ways
speaks truth to power, to the organizations that founded us, in an attempt to positively
influence their behavior. So a couple of notes about opportunities and challenges because I
think that is an important set of topics to sort of foray into. One of the big focus areas of us right now is in what we call capacity building. And essentially it is the idea that as a multi-stakeholder organization and there have been many
of them in the history of technology governance. Not so many of them in the history of AI. So I think we are probably the first, as we were founded in 2016, and almost everything we’re
doing is very experimental. But one of the lessons that we
learned from our predecessors and other very wise people
working in this space is that you cannot call
civil society organizations to the table to participate
in work of the sort that we are facilitating
without meaningfully facilitating and resourcing
their participation. So it seems a very basic
point when presented that way, but it’s really, really
crucial and gradient to effective inclusive
multi-stakeholder deliberations. So we do that by both
providing financial resourcing to very practically
speaking to organizations that are within our
network that do not have the resources to effectively
participate via time buy outs for researchers associated
with academic institutions and affiliates of
non-profits organizations. We provide travel,
stipends and reimbursements for affiliates of those
types of institutions, and we actively pay for
their participation, what we call experiential experts. Which are really really crucial voices in the types of discussions
that we’re facilitating around policy recommendations especially, but certainly also in recommendations that get directly to the practice
of technology development. So that’s a piece of our puzzle and we are thinking pretty
deeply about much deeper ways in which to do more of that work. There’s a huge cannon of literature in the field and a ton of practitioners that have a lot of experience, who’ve been supporting
that project for us. But it is an ongoing challenge, especially for technology policy, because the tech industry
itself is horrifying homogenous. And so that is a principal
challenge that needs to be addressed in any
government architecture, sorry governance architecture, that’s trying to approach these
really important questions. And the opportunities I
think just to sort of turn to a more optimistic
lens for a few moments. I think, Eileen mentioned in your remarks, you alluded to this sort of gap between the technology industry, or practitioners and the capacity the government
has to really meaningfully gravel with what smart
regulation and policy looks like. And I think in many ways,
the Partnership on AI was born of an interest in
meaningfully closing that gap because there is a huge dearth
of that type of expertise in government and it’s not just you know, I have experience personally working in the US Government context which really struggles
with this challenge. But it is a challenge that
governments everywhere struggle with all over the world. And in talking to people who work in multilateral institutions, or in state governments,
federal governments, or governments otherwise, it is always one of the first questions we
get asked as an organization. What can we do essentially
to more effectively understand what is happening in a field that is really challenging to understand from the outside looking in. And especially, when you’re
starting from a place of extremely low capacity,
which most governments are. So to the extent that we
can, as a body of experts, spanning all sorts of
different disciplines and institutional perspectives, really inform some of the question asking that’s happening in the
public sector right now, I think that’s really,
really meaningful role for organizations like
the Partnership on AI and other efforts in this space to play. And so we’re focused
in part on doing that. And also in trying to bridge inter sort of sector gaps as well, because there are similar
capacity challenges in understanding levels of the field. And especially the
technical AI research field, between civil society and the extremely well-resourced technology,
commercial technology sector. So we’re very interested in making sure that that type of capacity
challenge is also addressed. And I’m gonna leave it
there because I could talk for much longer about all
sorts of things related to PAI and it’s work but happy to
continue the conversation later. – Thank you, and finally Sean
McDonald from Digital Public. – So my name’s Sean McDonald. I work for an organization
called Digital Public, which has now been said several times. So thank you for your patience. Basically in 2012, I took a
product that had a footprint in 175 countries from a
desktop product into the cloud. And we were a non-profit at the time. We were very well-meaning organization and we’re looking for a conversation and actually really more than that, we were looking for guidance about how we should architect data. And we would sort of assume
that we were a lot further along in the conversation. And where we ended up getting to was, finding that so much of
the digital architecture that underpins our rights is actually this sort of supply chain
of organizations, right. For any of you who have ever built a run in application
that lives on top of AWS and then tried to make
a promise to your users that you would definitely
delete their data upon request, you have probably chased a
rabbit hole into figuring out exactly what that
means and how many layers of abstraction your agency
within your contract with AWS actually goes. And so what we ended up finding is that we’re operating in this ecosystem that wasn’t particularly well-aligned with keeping individual promises, right. And that companies as structures, in and of themselves are actually also not designed to keep promises. They are structurally
designed to build value and distributor contained-liability. It’s what they’re for. You shouldn’t pretend that
they’re something else. But that obviously means
that we have this set of challenges around how easy it is to manipulate agents, right. And so there’s someone
named Chris Taggart, who runs an organization
called OpenCorporates, which is, I think, the
world’s largest database on beneficial ownership information. He wrote a piece called
“Fireflies and Algorithms” and it’s basically, you
can spin up companies in Delaware inside of half an hour, no matter where you are
in the world, right, and create that as a company
that then is able to go and make promises on the internet to people about data rights. When incorporation becomes
a fungible structure, when it becomes a
strategic surface, right, we have a fundamental problem because most of our laws based on, most of our protections in digital spaces are based on agent based
accountability, right? So we hold companies accountable. We hold directors accountable. We may hold lawyers accountable. But where you start to see slippage and in corporation infrastructure, which is fundamentally sort
of the DNA of markets, right? You start to create very
real and very concerning problems about how as Jasmine was saying, we might enforce our digital rights. We’re making it significantly
more asymmetrical and significantly more complex to try and challenge and preserve the promises that were made
about how we’ll be treated in various digital systems. So that was the first kind of problem that led me to data governance. And my first sort of step
into that was looking at what are the legal forms, what are the enforceable structures that we use to protect common goods? And actually there’s a pretty good one and it is named deceptively simply. It’s named a trust, right? We have legal infrastructure, which is structurally designed
to architect accountability in ways that enable us to
more effectively govern common assets for shared
benefiter for the public interest. And so the idea of
digital trust is actually what brought me to this
Digital Civil Society Lab and it is very nice that Lucy
still considers me a fellow. And to understand that that essentially makes this the fellowship California. You can come but you can never leave. (audience laughing) – That’s how it works. – And so we started. It’s a pleasure to be back. We started kind of looking at, all right we have an
interesting incorporation model. Three, four years ago we were doing this. We were sort of blessed by a total lack of interest from the powers that be. And what we found was that
there was a huge appetite for solving this structure of problem. But that there was really,
really stark disagreement on what good data governance meant. So for those of you with
a background in law, there are lots of different
legal theories of standing. There are lots of types of
relationships you can have, or wrongs that can be visited upon you, or values that you can create that give you an opportunity
to defend yourself in a court of law. But all of those are different theories of why we might have acceded
the table of data governance. And digitization unfortunately is not making that terribly simpler. For those of you encountered
Professor Arvind Narayanan’s 21 models of fairness and
machine learning lecture, that he’s now modeled 21
different approaches to equity. 21 different political philosophies that you can manifest in code. And so we have this compounding of complexity in how we might design data governance systems. I’m nerdily sort of a
Chance the Rapper fan and I was listening to this
song that he has just put out. And he’s got this lyric that says that, it’s like reinventing the wheel
just to fall asleep at it. And I think that that’s
sort of the kind of profound observation about where we are
with data governance, right. We are definitely in the space where we’re reinventing something, right. We’re rebuilding structures for decision making and new surfaces. But we haven’t fixed the architectural and organizational problem. The DNA problem that replicates
these kinds of injustices, these lack of inclusion and
this really stark difficulty around accountability and
pursuing and enforcing our rights. And so, where we are now
and where Digital Public has gotten to is that
we’ve broken our work into three pieces which
I think will resonate with different work that people are doing around the
table and in the room. But it’s really around
trying to help build out a digital political science. We don’t have a meaningful
comparative taxonomy of data governance
approaches at the moment. So when say Facebook does something, generally speaking the public reaction is however people feel about Facebook. Kind of regardless of, I
mean not totally regardless, but heavily coloring what
the actual structure is. Same thing with companies or organizations that you may feel differently about. And so we are in this
stage where we have lots and lots of experiments. We have lots of fodder,
we have lots of pilots of people who are building
new data governance’s, mechanisms under different
political philosophies, different legal philosophies, with different organizational
and market based outputs. But we really need to invest
in these common taxonomies that will enable us to separate them from political opportunism and be able to speak to
it as though it’s a field. And I think until we’re
getting to that space, it becomes much, much
harder to have a fair, for lack of a better term, conversation about what data
governance should look like. And so our work is a combination of working with clients
to help try and build and wrestle with the cultural
and practical challenges involved in setting these systems up. Building a very clear and
direct and intentional pipeline back to researchers. So that practice is leading
a lot of the research that happens in the space. So the opportunities for
research in the space. And then lastly working with policy makers on the enabling environment, right? Because we are also in a place where the liabilities
assume for experimenting, even with the best of intentions are all very direct and very personal. And there is a shared
normative benefit, I think, in getting this right. And so finding ways that we
can create graduated paths to market, graduated paths
to scale and governance and that speak to a clear understanding of why we’re making decisions and who has a seat at the table in an individual circumstance, I think the closer we’ll get
to better data governance and better AI associations generally. – Great, thank you. So here’s how I’d like how to structure the remaining hour if it makes sense. You’ve heard a lot of examples. I tried to capture what
we went through just in the conversations in terms
of very different domains of law or framing that we might start from to think about what kind of associational structures might work for
managing digital data in AI. And we’ve also had two, at
least two, I lost count, but very clear statements
of starting principals. One about beginning with a recognition that the systems we already have have proven to be
inherently discriminatory, exclusive, we’ve got laws
that aren’t enforced. So there’s a lot about what
we currently have in place that’s not working for
a whole lot of people. Which may lead then into a principal that would say start from justice, start from inclusivity,
start from human dignity. Which takes me to the other proposition that was put out which is
that the one global framework we have that’s in place that
addresses both substance and procedure is the global
human rights framework. It’s got a lot of problems. But it’s a framework that
exists that might guide us. You can challenge both of
those two sets of assumptions. But that’s what I heard. And then a set of ways
that people are actually trying to get some work done, while also building the thing that they’re gonna do the work in. I mean this is really a case of building the organizations while
we also do the work. So what I like to do is use
the next 20 minutes or so for you to ask all the
questions you have of the group, to make sure other examples get out there, and any questions you have get answered. Then, I’d like to turn
our collective attention to this idea of identifying the problem. The common thing all these
efforts are working toward. Things, there’s lots of
them and add your thoughts. As well as hear from you
about any other models, experiments, structures, strategies that you’re familiar with, so that we can continue to
build out our understanding of what’s already underway. And then end with the focus
on well what researchable questions can we package up nicely and leave here at Stanford and
well all the other research in the room to take forward. So let me give you time to
ask questions of the panel. Yes? And I’ll just, one two. And I don’t know anybodies name, even though you’ve got name tags. So introduce yourself if you care to. – Hi. – Yeah, let me get this. – Thank you very much. Suzette Ship, I work here at Stanford. And I am working as a side
project on a health equity model. Especially for people of color to address both health
and healthcare outcomes. It’s in a very nascent stage. So I really wanted to ask Jasmine because you said of course, black people, other people of color, we
shouldn’t have to fix everything, but of course that’s what we do. That’s what we’ve been
doing since we got here. So my point is, and I’d like
to know what you all think. I think we should just go
ahead and formalize that, because we’re doing the work anyway. So I’m just trying to figure
out how to formalize that, how we get credit for what we’re doing and have everybody, any time
anything has to do with us, it should go through our
systems that we’ve created. – Well first of all, thank you for doing the
work that you’re doing. So I think it’s really, really important. Particularly in the health space. And we can think of like health data as some of the most sensitive, some of the most, having
some of the hugest impacts or influences on life outcomes obviously. So as far as making
sure we get the credit, we have to enforce it, right? But that’s us too, right? And so if you’re on any
of the social media, I’m sure you see that people are enforcing the fact that we’re not doing labor to not be compensated in some kind of way related to that labor. Whether it’s credit. You need to cite so and so. You need to make sure you
tag them for their idea, for their intellectual
property, so to speak, right? But that’s still a, we’re self-enforcing, where community is technology, we are reforming that infrastructure. I think it has to become a cultural thing, where people recognize
that somebody did this and their work is valuable. And you can’t just build off of their work without acknowledging that
there was work before. So I think that’s really important. And yeah, we are doing this work. At the same time, we shouldn’t have to do. And we can’t do all of the work, right? So hopefully we build allies and co-conspirators
actually to do this work. I think that what you’re finding is a lot of people, with
students and grad students and people going into the field are really interested on
how to make society better. How do we change life? And so they’re doing this
interdisciplinary work focused on like how do we remedy this stuff in various sectors? And I think that’s heartening, right? Because they’re doing translational work, not just the academic
but the, how do connect with the people acting on the ground. And I think that’s heartening to see. And I think also seeing that universities and other organizations are
not funding that kind of work and seeing the need to
have this public private, or public public-er relationships, yeah. – Good question. Any other thoughts? Okay. – So I’m Thyron Joe Twanny. I’m a DCF fellow at Stanford. I’m a little bit confused
because today I heard from a couple of you that
there’s a gap for the government. There are gaps in the government. And yesterday, DJ Patil, the former chief data scientist officer in the government under
Obama said very explicitly that in fact the government
has made many attempts to engage with the private sector, but people in the private
sector don’t turn up. And so therefore I’m hearing two things, so I’m a bit confused. Now anecdotally, I will say that whatever I heard in the last day and a half here in the symposium, whenever
I heard from the corporate or the technology companies reps, including Swartz, including Schmidt, I didn’t see any desire
at all to actually engage. So I would. And right after, if you bear with me,
right after yesterday, I went to an AI class at Stanford, which I’m a student in. And there was a former YouTube CEO, who’s now doing something else, came in to talk to the class. And he talked about anecdotes really do really bring this to life. Yeah. He talked about basically
if the law does not disallow me to do something, why should I? So in other instance of a lack of desire. So if you can bridge that gap between is a private sector not
coming to the table, or is the government not equipped? – I’m happy to help alleviate
some of your confusion. These comments won’t be comprehensive. But I worked with DJ
Patil in the CTO’s office in the last administration. And I understand what he is
describing, that dynamic. But I think the most present dynamic is actually the lack of
sustained deep expertise that exist in government. It’s one thing to host a meeting and expect people from
companies to show up. But it’s another thing
entirely for government to actually have the depth
of capacity necessary and the understanding
of the field necessary in order to architect multi
year long policy processes that result in smart
outcomes for citizens. And so that is what I think we really see the government hasn’t resolved. And the last administration
tried to bridge that gap with attempts like
the US digital service. And the Presidential
Innovation Fellowship program and 18F, several other programs
that really brought talent from the technology industry directly and from the research
sector into government for what we essentially
called tours of duty. Which was a model adapted from the defense department actually in trying to get people
to take sabbaticals from their day jobs and come
and serve their country. And it worked out pretty successfully. I mean there was a huge difference
between what had happened to government previous to that point and where we currently are given the programs and what they’ve brought essentially to the public sector. And some of that was
also modeled on programs that the UK Government, for instance, they were the first to architect
a digital service program. And several other global governments have experimented with similar programs. So there’s definitely a
starting place and a foundation. I don’t wanna paint the
picture as completely bleak. But it does remain the case that that sort of sustained deep engagement, like I mentioned, is really, really what we’re missing right now. And the other thing that I’ll say is that you know I think
to your second point, there’s somewhat a hesitancy I think, to engage with certain
types of governments, depending on the political
and cultural moment in which we’re situated because of a lack of trust in those institutions. And I would say we are
certainly finding ourselves in one of those moments right now. I’ll stop there. – Anyone else? – So I just want to
underscore two big things. One is we cannot overstate the extent to which our society has
been radically, radically transformed with
digitization of everything. And people are genuinely still getting their heads around it. The people you mentioned,
I won’t rename them, but I will say almost
there’s a generational passing of the baton. And I think several of those people were trained in an era where if you were a technologist, you were a technologist. You didn’t have to take philosophy, or normative courses, humanist
courses, human rights. No exposure to human rights. I see a real sea change in this idea that cross sector policy development, as well as cross disciplinary research. I see a genuineness to the belief that that is essential to better policy and higher quality research. And that multi-stakeholder process that’s global is where the
governance realm is moving. So I don’t think that those
people are necessarily representative of where we are today. But I will not undercut the idea that radical transformation of society and still a lot of conceptual confusion that people are trying to
get their heads around. There’s an urgency to it though. As everybody said, we have
to get our heads around it. But it’s a hard problem. – Yeah sorry, I think that the
idea that they’re not showing up is really interesting
’cause they’re the largest lobbyists in the country. They are one and two with a bullet. So, I think that they’re not
showing up the way that– – They’re showing up
through the back door. – I think that’s the front door, friend. (laughing) I don’t think they’re making
any bones about it, right? And I think the other
thing that they’ve done is spend a very large internal quasi-governance structures themselves. So the fact that they’re not engaging on the things that other
parties want them to engage on. A lot of times I think we bring education to incentives, right? And I think that if what we wanna do is change the incentives,
then we have to stop treating this as like it’s a good will problem and start engaging in the marketplace. You can do a lot with insurance and strategic impact litigation. You can do a lot to rebalance
the cost of bad behavior and at the international stage where there’s really interesting and potentially quite precarious moment where the international
competition around market capture is driving a normative race
that we as individuals, as consumers, as people
have to live in the society should be quite concerned about. There’s a great book called,
“Rules Without Rights” which is a history of the labor and environmental regulatory spaces. And it does a very interesting job of looking at this is what it means when we talk about rules that
don’t have implementation infrastructure and this is
what it looks like when we do and you’ll be very surprised to learn that enforcement matters. I think the narrative that
people aren’t showing up is one. In the same way that the
narrative that the law can’t keep up is equally absurd. Like the law can keep up at whatever pace we bring claims through it. But people are being
forced into arbitration and people are being forced out of court. I’d just say that I
think that there’s a lot of participation in the mechanisms. But if we’re not getting the character or the type of presentation that we want, then we have to realign the incentives about how you get there. – I’m Renell, hi, I’m a student in
international policy here. And I wanted to stay on
that moment of what are, I’ve been thinking about
different analogies. Eileen has brought up the international human rights law context. You just brought up liability. And in the international context, where many of the enforcement mechanisms either never existed or have
been stripped of their power. So there’s no torte liability, there’s no intermediary liability for the things that platforms post. Are there other places that we can look to for the motivations that
would bring a company that doesn’t have any legal responsibly for putting out into to the world a product that is causing harm. Especially in places where
they haven’t resourced the safety mechanisms the way that they have in our country. What are the motivations
that you could look to? I mean, is there something
under trade policy or product safety regulation
that we can think about, especially in the deployment
of AI technologies that could be dangerous in places that we don’t have safety
mechanisms in place? – Two quick points. I wanna join the first
part of your question with something Jasmine said. Which I have to admit,
the human rights framework has always lacked
enforcement mechanisms, okay? And if governments who have the obligation to protect liberty and security don’t do it, it doesn’t happen. I don’t think that’s a good enough reason to walk away from the framework, ’cause I genuinely believe it’s better than anything we will come up with at this geopolitical moment. But I will acknowledge the
right to remedy is in there. It’s just not executed upon the failure to live up to the promise. It’s tough in a global system when you’re depending on nation states. I wanna just mention one idea that I’ve heard as it relates
to the disinformation realm. Rather than governing through a lens of content based restrictions, which most of the experiments
in Europe have been about. NetzDG expanding the categories of speech that are restricted in ways that violate free expression. Outsourcing government function. Sort of a rule of law kind of violation. By outsourcing the responsibility to assess criminality or
outsourcing censorship which is also violation
of free expression, as well as rule of law. So instead of that model,
which was the first instinct of governments, pushing on two things. One is the concept of transparency as a first step of civic education which sounds kind of boring but at the heart of the matter, if the public does not understand how algorithmic societies work, civic space is over in
a digitized society. We are all, our autonomy and
agency is going to be lost if we don’t understand the basics. So I think that’s much more potent than it sounds at first blush. And the second one relates
to the private sector, is this idea, one of you, was it you? Talking about promises? Both of you really. Consumer fraud. Not keeping promises. I think there’s a lot of potential there. On the advertising front
and pocketing money for fraudulent advertising
that’s disrupting our democracy. But failure to do what you say will do, in terms of protecting data. So I think there’s a
lot to work with there that doesn’t get into
content-based restrictions that undermine free expression. – So I just think when
you have a framework, which in theory is really good. Freedom of expression,
freedom of mobility, a freedom of association. All these good things in it,
but you have no enforcement mechanisms and people who are signatories could pick and choose which
ones we wanna do today. It’s no better than the ethics that we’re talking about now. So Google, we know had, don’t
be evil for a long time. And so like, how are we interpreting evil and then they jet us in that phrase a couple years ago, right? So when interpretation is the issue and enforcement is the issue, then we are still in the same spot that we don’t wanna be in. But I think to your point though, where do we look for possibilities? I’ve been thinking about this a lot. And I’ve been thinking about where in both the public and private sector, what is the rate of error
that is good enough? What is the allowable rate of error? If I’m doing some survey and
I’m doing some experimental methods and I get over
.05, then you know what? Something’s wrong with my data. But if I deployed algorithmic
system, and you know what? It’s not recognizing dark women. But it’s pretty good, right? ‘Cause we can find lost children. Like, give me a freaking
break right there. So over 30%. Are we going over .05? Or can we lower it to what we use for medical research, the .001? What is the acceptable
rate of error for humans? And people are dying. People are dying. Who gets to get healthcare first? People are dying. People are dying in prison. People’s life expectancy is lowered. Like what is the acceptable rate? And is it different for
different sectors of society? So if I wanted to look at
So if I wanted to look at strict liability law in the US. If I wanted to look at attractive nuisance laws. If I wanted to look at even FTC regulatory frameworks for deception and unfairness. What could possibly deceive somebody? What could possibly hurt somebody that they couldn't protect themselves from? We have these things in law. We just don't really enforce them with the technologies that we're allowing to emerge and be deployed upon us. So it's not necessarily that we don't have all the laws, it's that we are selective, again, with enforcing them on certain systems.
– One thing that's always really struck me about this conversation particularly is that the human rights framework says the words that we want a lot of data-protection-style law to say, but doesn't enforce them. And property and commercial law ecosystems don't really say the things that we want them to say, but they work way better, right. Commercial law infrastructure processes higher volumes of lower-value, higher-complexity cases than any other kind of legal infrastructure in the world. And those are in many ways the kinds of problems that we're gonna end up needing to solve in a lot of digital ecosystems. So I think that there's this, I always make a lot of friends by talking about commercial law. But if you hear the things, the mechanisms, that we're talking about here, product liability, we're talking about representational law, right? I mean, a whole lot would change in the economics of data services if you were held legally responsible for your margin of error, as in a slander or libel case, right? And that law exists. But there is a huge amount of marrying the law that we use to regulate or manage private relationships and privately made promises
with the normative goods that we have always expected governments to be able to uphold. And the pathway to market has
substantially changed, right? A company can go global
and launch immediately. And if you were a government before, that company had to come to you and start a business and get permits and then seek your approval in a number of ways, and then they could get to your market. And now governments are having to figure out how they ring-fence their markets, and how they prevent products that are exploitative from getting there. The last thing I just wanna say
is one really, really simple point of intervention that
I am completely confused by why we have not done more of, which is requiring warranties, right. You buy a car, and then you use it, you drive it into the ocean. It turns out the car maker is not responsible, because they told you it's only useful on the road, right? And your driving it into the ocean, despite their best intentions, was not particularly foreseeable. We have somehow in digital ecosystems stopped requiring warranties. But the thing about requiring a warranty is it forces a manufacturer, forces a product seller, to test that product in context, to make proactive assertions about what it can and should be used for, and then to channel liability through that assertion. That's an incredible way to rebalance the R&D costs and the enforcement costs that currently take place in this ecosystem, using just one really normal commercial contracting mechanism. But it's one that we don't enforce in digital spaces and certainly could.
– Great. There was a hand here, here and here. Asking you to remember who to pass that mic to.
– Hi, I'm Marta Smana. I work for the Campaign to Stop Killer Robots. But my question is unrelated. Eileen, I was surprised that you had Microsoft up on your list of champions under their human rights impact assessment. From what I've heard from my colleagues at the ACLU, Microsoft weakened legislation; while Brad Smith publicly talks about his concern over facial recognition, they weakened legislation in Washington State through their tech lobbying. Just recently there was an article that came out by, I think, April Glaser, where she was talking about AnyVision, a facial recognition company that operates in Palestine and essentially has Microsoft investment and uses their services to monitor Palestinians in the West Bank, essentially a captive population that can't get away from that surveillance. And also, Microsoft just won their new JEDI contract for $10 billion, which in and of itself is not nefarious, but it'll allow the Department of Defense to achieve all of its AI dreams. And seeing as Microsoft hasn't proven itself to be a super great actor in the past, and also, as we've talked about already, the laws that exist are not sufficient, especially in my work, when it comes to fully autonomous weapons and military use of AI. A lot of the DOD policy is non-binding; it's directives, it's not legislated into law. So can you talk a little bit more about why Microsoft's on that list?
– Microsoft's on that list
not for the reasons you said. I’m not trying to defend
any of those things. I am saying they have led
in the mechanism development of human rights impact assessments. And if you compare them
to other companies, they’ve not only embraced
the responsibility under the UN Guiding Principles, they are actually working
to apply those principles in systematic efforts of design and anticipating use of products and applications down the road. And so, in a world where governments are failing and the private sector is having a much bigger effect on the enjoyment of human rights, I do appreciate companies saying, we will embrace those principles. They're not self-drafted. It's not like we got ourselves and our special group in a room and decided the ones we like and imposed them on ourselves. They're embracing this framework, and they're role-modeling what it looks like to try to do that in their processes. So that is what I like about them.
– More discussion offline. (laughing)
– Hi. My name is Ben Gangsky,
I’m a civic engagement designer and researcher
institute for the future. We heard a lot from y’all. First of all, thank you all. This has been informative and inspiring. We’ve heard a lot I think about raising the floor to protect against harms. But I’m very curious to your outlook on what raising the
ceiling might look like. Specifically Jasmine and
Sean with your engagement with Data Trust, I’m curious, how can those kinds of formations, or other kinds of new
institutional formations make for a greater enforcement of human rights frameworks and
new opportunities for people? – Yeah so I’m really, like I said, I’m really encouraged by
movements of various communities and groups to govern themselves in various ways. Right now, there are quite a few Indigenous groups who have said: if you wanna use our data, first of all, if you wanna talk to us, right, if you wanna do your research on us, but if you wanna use our data, then you need to come to our house, respect our house, and follow our rules and guidelines. So we have a rule and guideline structure with regard to how you may or may not use our data. What our data is for, how you should interpret it. So we have a model, right. Not just data, we have a model that you should use when interpreting the data. And so that's encouraging, right? So you have people who have said, you know what? We're tired. And therefore, we're creating our own governance structures.
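(A rough illustration of the rule-and-guideline structure just described, if it were made machine-readable. The field names and rules here are invented for the example and do not represent any actual community's governance scheme.)

```python
# Hypothetical sketch of a machine-readable data-use rule structure,
# loosely inspired by the governance ideas described above. All fields
# and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataUseRules:
    steward: str                   # the community that governs the data
    permitted_purposes: list[str]  # what the data is for
    requires_consultation: bool    # talk to the community first
    interpretive_model: str        # the model to use when interpreting
    prohibited_uses: list[str] = field(default_factory=list)

    def allows(self, purpose: str, consulted: bool) -> bool:
        """Allow a request only if consultation happened, the purpose is
        permitted, and nothing prohibited is involved."""
        if self.requires_consultation and not consulted:
            return False
        return (purpose in self.permitted_purposes
                and purpose not in self.prohibited_uses)

rules = DataUseRules(
    steward="Example Nation data council",  # hypothetical
    permitted_purposes=["community health planning"],
    requires_consultation=True,
    interpretive_model="community-defined context model",
    prohibited_uses=["surveillance"],
)

print(rules.allows("community health planning", consulted=True))  # True
print(rules.allows("surveillance", consulted=True))               # False
```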
With data trusts, the same. With the DACs I talked about, same type of thing. So groups have recognized, what we have now is not working. The tech that's being deployed on us, or being used, or how we're being researched, how we're being surveilled, is obviously having long-term detrimental effects on us. So what can we do? And people are coming
together to govern themselves. – Yeah, there’s that old
quote that it’s like, in a culture designed to
make you feel insecure, being yourself is an act of political bravery. It's sort of like that with data, right? In an age of learned helplessness around digitization, this community organizing and this self-possession and seizure of assets, in terms of representing a community or set of interests, is a pretty radical act. And there's a whole bunch of examples of people doing it in
these really exciting ways. We’re seeing natural resource
collectives come together to use their data to
negotiate with regulators, to negotiate with technology providers to level the playing field, right. We’re seeing large commercial actors come into the space and
recognize that the things that are gonna differentiate
them from the big five are building better relationships and extending more equitable
means of adjudication. ODR, Online Dispute Resolution, a system from eBay, was an early example of how you might extend governance mechanisms via technology even though the jurisdiction doesn't require it. There's a group called the Light Collective, which is a patient advocacy group that is doing really inspiring work. The Data for Black Lives movement is doing really inspiring work. Both in the advocacy space. And you're also seeing
that there have always been these informal networks of groups and often very powerful
actors building value that just didn’t have the
means of formalization. So we’re working with a group of medical researchers who
also have relationships with hospitals and commercial
interests and patients. And essentially building the
tables they can all sit around and make decisions together
in a structured way. So in terms of raising the ceiling, I think a lot of the
conversation gets focused on how do we change the behavior of the big five or the big 10, or companies that everybody
knows the name of. But in that focus, which is like choosing to go from tryouts to the Super Bowl, you’re losing almost all of the incredible games in the middle, right. And there are these huge, huge numbers of actors who are seizing
on the representation power that digitization could give them, or the constructive power
that they’re able to take by building their own data
products, or their own systems. And that, I think, is raising the ceiling dramatically.
– I can also offer another example. As it pertains to AI specifically, I think we're starting to see an interesting movement in
the activist scholar community around trying to operationalize
really, really interesting ideas that have been generated by one or several individuals, trying to work out how to more effectively install values of fairness, transparency, accountability, et cetera, in AI technologies and tools. So one example from the PAI context specifically is a project that we have called About ML. It's an acronym that has too long a name for me to memorize. And I always forget it. But if you're interested, you can look at our website for more information. And the concept is essentially drawn from a body of literature that started emerging in the AI research field in the last five years or so around transparency, and how you actually effectively produce transparency in situ, essentially, in product development contexts. The idea is essentially that a group of scholars from academia and from industrial research laboratories started producing papers on this concept around the same period. And a lot of the idea was situated around how we can produce essentially the equivalent of a nutrition label for data sets and models in AI. So that there's more public
and institutional transparency and accountability around
what the limitations of the data or the models might include, what data provenance entails for any given data set, and so on and so forth. And we've actually modeled it on an IETF or W3C standards process. But we've upended it so that it includes much more community input and is a process that is not just driven by industry. It's actually driven by affected communities, essentially, through a partnership that we've developed with a program called Diverse Voices at the University of Washington Tech Policy Lab. And the idea is to produce a standard for industry and for the research community that will hopefully actually be adopted by the end of this process, because of the involvement of these institutions from the outset.
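(By way of illustration: a hypothetical sketch of what a dataset "nutrition label" might contain. The fields are assumptions drawn loosely from the datasheets and model-cards literature, not the actual About ML specification.)

```python
# Hypothetical "nutrition label" for a dataset, sketching the kind of
# documentation the nutrition-label idea points at. These fields are
# illustrative assumptions, not the actual About ML schema.
dataset_label = {
    "name": "example-faces-v1",  # hypothetical dataset
    "provenance": {
        "sources": ["public web crawl (2018)"],
        "collection_method": "scraped, then hand-filtered",
        "consent_basis": "none recorded",  # a red flag worth surfacing
    },
    "composition": {
        "size": 50_000,
        "known_skews": ["underrepresents darker skin tones"],
    },
    "intended_uses": ["benchmarking face detection in research"],
    "out_of_scope_uses": ["identification of individuals", "surveillance"],
    "known_limitations": ["error rates not validated across demographics"],
}

def check_use(label: dict, proposed_use: str) -> str:
    """Flag a proposed use against the label's declared scope."""
    if proposed_use in label["out_of_scope_uses"]:
        return "out of scope: documented as an inappropriate use"
    if proposed_use in label["intended_uses"]:
        return "within documented intended use"
    return "undocumented use: review limitations and provenance first"

print(check_use(dataset_label, "surveillance"))
```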
But that type of bridging of the academic literature into practice is also something we're starting to see much more movement in. And I think that's also a really good opportunity to incite a race-to-the-top dynamic, aside from just raising floors, as you suggest.
– Comment here and then it'll be.
– My name is Alca Roy. I come from a human rights background and also in technology and art. And one of the things from
a conversation yesterday that was happening. I had worked on the women's rights as human rights campaign. And so I also understand the difficulty of enforcing good ideas. So, I really applaud you for anchoring that. I was thinking maybe we should start a user rights as human rights campaign. So we can start thinking about our users as humans. I also work on the–
– That's a great idea. What a concept.
– I know it sounds
really, sometimes you have to say the obvious. So we are starting a responsible innovation project, if anyone's interested. I also work on open source, on trusted AI. But who pays my bills is a company. I work for a corporation and innovation center, and this is all stuff I'm trying to do on the side. But I wanted to list a few things that maybe we can talk about in the later part. One, and many of you have talked about it, so I just wanted to get inside your brains right now: we run into challenges of having a data governance ethical model. I know you talked about standardization, but literally when we go out, I have 100 really smart engineers and we can't figure it out. Not only on what data to include, but inclusive big data versus small data. Inclusivity versus privacy. How did you collect the data? Whose permission did you have? What are you gonna do? And do they know what
are you gonna do with it five years from now? Because once you've given it, you've given it away. So that's one thing. The other thing I'm interested in is how you bring together physical rights of things, or safety regulations, with digital regulations, with human regulations. 'Cause that's AI, right? When you create autonomous systems, safety requirements for humans are now going to overlap with safety requirements for goods. Because now machines are going to take autonomous actions, so whatever your requirements may be for human activity will come into play. Who's liable when somebody has an accident? Now machines are making not only decisions, not only are they products, but they're actually autonomous. So that's an interesting overlay that we're looking at, and it'd be interesting to see what you've thought about. And the last question I would say, what we run into, and you've
talked about structural funding: universities are funded by corporations. Non-profits are funded by corporations. If you look at motivation and influence, at the end, after studying everything, it's follow the money. And it's really hard to find and create a structure that will not only have enforcement laws, but that you can actually enforce. There are so many laws, I know, that people are breaking. But the ability to fight that, and having the resources to fight that, when other people have all kind of colluded in the back, like, let it go, let it go. How do we change that culture? Sorry, I know it was long.
we’d be in good shape. Did anyone wanna respond? – I have one tiny piece, which is simply on the user rights thing. It’s very familiar to
the community working on data trusts, et cetera. To the human rights community, it's not that familiar. And I think elevating it, that's where the light bulb goes off. Like women's rights as human rights: user rights. But I would add, the big problem here, it's not just users of products and services, it's the larger societies.
– Agreed, individual, collective. But I'm just saying how you (coughing drowns out speaker).
– A lot of times it's not the consumer, it's not the user, it's everybody else. And we have to find a way to talk about that.
– Well, and I just wanna put a note on this, because I think this
challenge in understanding collective rights is actually where the progress that's been made in bringing attention to digital rights largely falls flat, because they're so individualistic that we've cornered ourselves from being able to act. I'm looking for the associational structures of the future. That's really what's going to take action on those collective rights.
– The only thing that would
be, is who is the collective? Because when people start defining the collective, a lot of people get left behind.
– Well unless you, right–
– I'm just saying.
– No, I completely agree, and the opportunity then is in the collective defining itself. Which I think is a little bit of what Jasmine is finding. And so one question for the larger group, as we think about all of the ideas that have been put on the board, is how do those self-organized sovereign collectives, which have a set of internal-facing rules, actually get recognized and have their rights enforced by the larger legal system, right. So that they're not just a bunch of one-off–
– Is that a two-finger?
– Efforts. I had a–
– I have a quick question here.
– Before we move on.
– Did you have your hand up? I'm just trying to keep track of the queue. Rebecca, are you right on this point, or? Okay, jump right in and then we'll come back.
– I just had one quick response, or even a bookmark actually, to
place in your comment about money, power, and accountability, because I think it's really, really, really important. I don't think, if we spent the whole two hours of this session on this topic, that we would run out of things to say about it. But I do wanna just note that I think one thing missing from this conversation is the role of private philanthropy in helping understand what we need to be investing in. And this has been a long conversation in the PAI context and in a lot of the civil society organizations that we work with every day. But one thing that we have noticed in working with our philanthropic partners is that there is also
a deep lack of capacity in that sector to really
grapple effectively with the dynamics that we see unfolding and with technology governance questions. I think, like I said,
there’s a ton more to say, but I just wanna recognize it as a point. – So then I had Rebecca, and
then we’ll go back to you. – So just by a way of
background on collective rights. I run a project called,
One Thing Digital Rights. And we rank the world’s
largest internet mobile telecommunication
companies on their policies and practices that effect
users human rights, particularly free expression and privacy. But the current methodology is focusing primarily on individual rights. But we just developed a
set of draft indicators that we’re going to
integrate into the index and then we’re gonna pile
it in the coming months and do an initial pilot report that’s looking at
collective rights and harms. And the way we look at that
is through human rights risk center as we call them. What are the potential
harms to marginalize groups, minority groups highly at-risk groups and that’s the lens we
take to collective rights. So that might be useful in the discussion. The draft indicators and exclamation of what rights that they’re linked to, in terms of what kinds of policies companies ought to be having, in relation to AI and algorithms, as well as targeted advertising are tied to specific human rights norms. And that’s on our website
at rankingthedirects.org. There’s a link on the blog. So there’s indicators. But also some previous documents that go into create a
gap in the human rights (coughing drowns out speaker)
scenarios, the risk scenarios that we’re facing and so on. So happy to talk to people later about how we’re define in that stuff. – And thank you, thank
you for sharing that. – Can I ask a question back? – That’s what we’re
trying to generate, though. Those were examples of other approaches.
– But I do have a question. So I'm curious about how the breakdown is between outcome-based rights versus procedural rights. And particularly just thinking specifically about, do open-ended data licenses, for example, play a determinative role? So for example, if I can't predict future harms because it's an open-ended data license, does that–
– Yeah, we're looking into things where we're biting off what we can chew, in terms of what's possible to set standards for, to compare, and to actually point to as a positive guide. And so it's not all-comprehensive on everything. But it's a start, right. So, if that answers your question at all.
– Terah, did you want?
– No, I'm just waiting. (laughing)
Thanks.
– Ben Shneiderman, University of Maryland. And also representing the ACM's US Technology Policy Committee, which works on this. I'm just very charged up and energized by all the passionate people and wonderful ideas I've heard in these two days. So that makes me optimistic. Of course, that optimism is tempered by so much of the negativity around here, or the concerns and dangers, which are legitimate. So I wanna reflect on ways to go forward. And the suggestion of a
taxonomy of governance is, I think, helpful here, in that what I'm hearing is that there are a lot of devoted people from the civic action, NGO kind of independent groups here, which are moderate-sized, 30 or 100 people. It's better than having every one individual, every user, try to fight the battle. So that level works okay. The other level that we've got, at the very top, of let's get the government to do it, is a really difficult thing. Being in Washington and being on these policy things, it's really hard to move that. When you do, you have very powerful impacts, as Terah knows, and it can work out. But I like to advocate, or raise, or hear more about enlisting the existing powerful intermediate organizations. I think that was hinted at by Sean's comment about warranties. Well right, the software industry has a history of 'hold harmless' and 'delivered as is.' And that idea needs to be broken. The question is, how are you gonna do that? So your partners in that, I think, are going to be the professional societies, the insurance companies, okay? The auditing companies. I think the KPMGs, Deloittes, PwCs are going to be powerful forces. And they have much stronger levers, bigger levers, than the civic action groups that are smaller. They also are durable. They've been in place. They're respected, just as you wisely draw on the existing human rights guidelines. I think that's a durable
powerful structure. So I'm putting in place, in this taxonomy, the suggestion that the civic groups that we're hearing from could, in addition to doing their own advocacy, engage, enlist, catalyze, provoke, and push those other institutions, which have a long history and were successful, I would say, in the recent opioid pharmaceutical cases, in reining in the smoking and tobacco industries, and in other cases where you want to prevent or counter harm that's been put in place. And again, the professional societies and the Computing Research Association, the ACM, and the IEEE. IEEE's Ethically Aligned Design guidelines are really very nice, and we need to propagate those further. Another institution is the journalists, and propagating these ideas to reach the public; ultimately, we need the public's support to make anything happen, and the journalists need to be engaged. And I didn't see journalists here. Maybe there were. But I couldn't tell, or I couldn't tell there was an engagement. So I think there's a whole set of places. The enthusiasm for developing a new and focused group is great. Keep that, but add to it
the leverage and power of existing institutions that
are respected and have access. – If I could just– – One more. – Oh sorry.
– Go ahead. – I mean no if you’re– – I just wanted to add
that I keep hearing the word autonomous, and it's sort of a separate footnote, but I really hope that could be put back into history as archaic, like agents and Google and giant brains. Because autonomous is just a generic term. It's either autonomous or not. Or it's fully autonomous, it was said. And there's a rich, rich landscape of autonomy, of ways to design it. And the design community is what really needs to come and play. I should say I'm speaking about this tomorrow in Gates 104, but that's kind of, I think, the way to go. But I'll just come back and say the central issue for political action, I think, would be to engage with these middle-sized structures.
– I'm just gonna offer, I've
been focusing a lot in my own research on the role of supply chains recently. And if you look at sort of the way Walmart, quote unquote, greened its supply chain and the footprint that it had there, there have been a number of ways in which high-end or top-of-market buyers are really able to, whether it's transparency through agricultural supply chains or labor monitoring through manufacturing supply chains, those are another very important, sort of existing infrastructure in which power relationships get worked out. And at the highest level, large-scale buyers' interests are very often aligned with market stability and good practice, because it reduces compliance burdens. And so you look at something like Libra, for example: the folks who were out first were the institutional financial suppliers, because they had actual compliance footprints. Whereas the feel-good civil society actors, who were meant to be the ethical backbone of that project, are still toughing it out. So just to offer that I think there are absolutely additional infrastructures. But also, within the purchasing decisions that are currently made and the pressures that aggregate up through reputations in supply chains, there's a huge amount of tactical surface there to do a better job of using buying power as a force for good.
– How many other people have kind of a general question on this? I've got one over here. Are there others in that queue? Because I wanna get to
them if you’re out there. And then I want us to
take the last 15 minutes, if you know of a kind of
associational institutional experiment or attempt that’s out there that we haven’t captured
up here, share it with us. And then also think
about the big questions, potentially researchable questions, we can turn them into
researchable questions, that you’d like to see a
group of cross-disciplinary, cross-sector people take a crack at trying to understand. What can we do to get better at this? I'll say this, this is a little bit of a mini advertisement for another conference that's coming up. There's one at the end of this week, the Digital Civil Society Conference, this year on campus. You're welcome to join us there. That's another scholar-practitioner conversation about a lot of these issues. And then there's a designing data governance conference happening in Washington. There are lots of opportunities; we should be using these as building blocks and moving towards something, rather than repeating conversations. So with that I'm gonna
take this last question and then ask you to think about bigger doable things we might undertake. – Thank you. Thank you. – Yeah it works.
– Go ahead.
– Thank you. Very energizing talk, and very inspirational. I have a question about your comment about the warranty. And I wholeheartedly agree that the enforcement mechanism is very important, and warranties could be a good means of enforcing such standards. However, if you look at how innovation takes place, especially in these digital tech companies, if you think about the genesis of the companies, their initial, original visions are very different from what they look like today. If you think about Facebook, Google, even Amazon, what they initially envisioned is not the service that they're providing today. Even at Stanford Business School, they teach that to be innovative you just pay attention to what customers do and how users use your service; kind of, don't be rigid about your initial ideas, but go with the flow and listen to the customers and provide a service that they want. So in that context of
the innovation framework, I think it's hard to envision how you can define the warranty, in the sense of: this is the service that we're gonna provide, this is the boundary of the service, do not use the service in any other way, and I'm gonna provide a warranty only within that context. And moreover, if you look at artificial intelligence, like a robot, if you think about how robots can evolve and then learn from the context, kind of gathering information out in the world through interactions with other users. And if they start doing things that they were not initially programmed to do, then who should be held accountable for the perils that result from such behaviors? And we talked a lot about the standard of the data and data governance. But if you think about how artificial intelligence or algorithms are developed, the sampling of the data, the form of the data, and how data is handled and cleaned is part of the competitiveness, and it's a very big part of the trade secret. That's why one facial recognition algorithm works better in certain circumstances and not in another. So if you enforce things in a way that's not quite how the companies and corporations operate, then I think it's inevitable that they think all these discussions are kind of irrelevant to the way they work. And they continue not showing up, and not participating in these kinds of discussions, because they can think that the standards and guidance they're asked to follow are either irrelevant or undermine their competitiveness or innovation. So I just wanted to hear your thoughts on that.
– Yeah. (audience laughing) Well, a couple things. I think that the easy way to start is to say that if you
look at Helen Nissenbaum’s work on contextual integrity, right. The way that most things
work through entire course of human history is about how
we contextually understand each other and the representations that we make to each other in an environment or in a proposition. I often compare tech company design to (speaking foreign language) farming. So when you first get a goose,
everybody wants it to grow. And the (speaking foreign
language) farmer especially. And what happens is when the company gets to a certain size of maturity, the goose’s incentives and the
(speaking foreign language) farmers incentives
diverge pretty strongly. Right, they cut that up, they cut the goose up usually into pieces. So I think that that’s
probably like a thing to know at the beginning, right? Because what ends up happening, the types of pivots that you’re describing in analog commercial terms, or deceptive commercial practice, right. And so there’s a whole bunch of things that are original parts of the agreement that we undertake with companies that I don’t think we wanna
give totally unilateral license to companies to change. I think we’ve got a good series of reasons why we don’t do that. If you look actually, the more… if you think about technology
it’s sort of something that amplifies our power over each other, most of the way that say
ethical certification, ethical guidance and pathways
to marketing and design are about starting a little
bit of value proposition and then allowing a higher risk experiment to then validate that. And so you get mice trials, and then you go to human trials, and then you go to FDA approvals. Hopefully, got FDA
approvals in the beginning. But the point being that we
have lots and lots of ways that we manage dynamic pathways to market. And that they don’t have
to involve deception. Or that they don’t have to involve really, really materially
different changes. And I think that when services change their terms like that, we should
treat them as new services. Like WhatsApp data policies changing have put now hundreds of millions of people in the position of having the fundamental promises
made to them when they signed up to the platform completely
violated with no recourse. So I’m sure that whatever
experience Facebook will deliver via WhatsApp will be
magically worth hundreds of millions of peoples
expectations being violated. But I don’t know what that looks like yet, and I don’t think that for
them the equities are in place. So I hear what you're saying. I kind of find the innovation-at-all-costs narrative to be really, well, I take your point about it being the state of play, and so something that we have to engage with if we want a certain amount of participation. But I think that I was really serious about standing up for one's rights as a radical act, in a space where we're told again and again that we have no ability to hold anybody accountable. And as somebody trained in international law, I just think we can do a lot better. And I think we can work with both bigger carrots and bigger sticks.
– Do you wanna follow up?
– Yeah.
– I mean, other people, I'm
sure have other thoughts here.
– So I'm not saying that innovation should take place at all costs. I agree with you. But I'm just lost, in the sense that I don't find an adequate legal framework or standard that can be applicable in this kind of situation, because, I mean, our negligence standard is reasonableness. So it's really hard to prove what use you can reasonably expect of this or that in the context. And at the same time, in the context of international law too, there are cases where, I find an analogy in the case of the use of child labor. So child labor is something that we would like to avoid at all costs, and then some people boycott products made with child labor. But there are a lot of trends in the economy where I don't see a very effective enforcement mechanism that can be applicable in all those situations. So I'm just kind of trying to find and understand what is the right standard and framework that can be applied in this very fluid, not binary kind of situation.
– I think you find yourself
in very good company in this room of people who
are also looking for standards that will increase enforcement. So I don’t think anyone’s
got a perfect answer. I’m more than happy to
dig in on negligence law. But maybe, recognizing we're on the clock, we'll do that after.
– There's a lawyer checking the clock. (laughing)
– Every six minutes.
– That's right. Ka-ching, ka-ching. Let's just shift direction
for the last few minutes here. From all of this are there
folks in the room here sort of sitting with, you
know what we need to know? We need to know X. What is the big question
that you’re left with or smaller questions that we might add up. Where are you feeling like
there’s some positive movement? Help us figure out from this conversation, from all of those ideas,
different domains of laws, different associational structures. What do we need next? What do we need to know? What do we need to think about? – I thought from this
discussion, what is missing is things that bring the results of this biased data and biased infrastructure alive. And in terms of examples, you know, I'm struggling with this thing of having almost perfect data when we are flawed and biased human beings. So the data is gonna be flawed. And perhaps we keep, I mean, the danger of course is we stop this great innovation and great movement by actually creating speed breakers. I mean, one radical thought is let it keep going and provide the rules after we figure out how a lot of these biases are working.
– Even though there are people being harmed in the immediate term. I think the challenge here, I'm not sure that this particular session is where the question about data in particular fits in. But to be honest, I think there's a very strong sense that what you think of as fabulous innovation is seen by an extraordinary number of people, who are often unheard, as nothing but harm. And stopping it would be a step forward. So I think we're looking for middle ground there, but I encourage you to consider leaving behind the narrative that innovation, innovation at speed, innovation is an inherently good thing with maybe some external costs, when the world is telling us, and there were several sessions yesterday, that those are lived experiences and real people, and they're not externalities. So I think part of the challenge here is realizing that there are two very different narratives. And one should not be treated as secondary to the other. They're both of primary concern. So let me take this
question and then here. – One thing I would like to
have a multi-disciplinary team tackle is how to inform people, users, policy makers, otherwise,
what are the limitations of the systems, without EULAs.
– Without what?
– End user license agreements. Because those are just full of legalities. No one reads those. And no one's going to get informed about those things.
– So even just a better conversation between the product and the user?
– Yes.
– Yup. – If you would, just slide that down. – Hi, so I have two things. One is related to what
this gentleman just said. Which is I think one way to think about, maybe it’s a way to expand
on what you just said. Which is, how do we think
about the difference between availability of information and access to information, when it comes to things like transparency, when it comes to things like
and users agreements, right. So we all click agree. But we almost certainly don’t understand what we’ve agreed to. So can we create processes that make it so that you actually have to try to ensure that people actually understand it, rather than they just click a box. And not just on end user agreements, but like when you’re releasing data, when you’re doing transparency reports. Are those available to people
in a way that you don’t have to be an expert to understand? That’s a rhetorical question. And do people know they’re available and where to find them
and how to find them and are they publicized? So I think there’s this question about availability versus accessibility. And I think that’s worth digging into. And another thing that I’m left with at the end of this is about measurement. And we had this a little bit on bias and particular in Joy’s presentation. There’s some people doing good work. How do we measure the harms and the value? Because we’re talking
here about trade-offs, and I think it's hard to know what the trade-offs are when we don't know how to measure what the impacts are. And we have some measurement of the harm, a little bit, but not as much as we need. I think we have relatively little concrete measurement of the value. Like, what's the actual added value? And then, how do we consider these trade-offs, and when the trade-offs are between lives and a game, or whatever. I mean, I'm being a little facetious. But how do we create measurements that allow us to have those conversations in more concrete ways.
– Just pass it right back.
– Thank you for your comments. One of the things we're
struggling with in open forum is just definition. A common definition of what transparency means, what harm means. 'Cause what may appear as harm to you is the cost of doing business for someone else. And I'm not being flippant, but literally we're sitting down and you have all these really smart people trying to agree on basic principles, but what do they really mean? Definition. The other thing where I think we have an opportunity in this group: there are so many white papers. I'm, like, white-papered out. There are so many intelligent people thinking about these things intelligently, but we need a platform, which is what I was hoping we would get here, of a really strong collective, mini collective, common collective voice that says: you're causing confusion, through academics, through business parlance, through even human rights language in some cases. Can we cut the crap a little bit? Or cut through the crap. And it's not all crap. There are really smart discussions going on. Can we synthesize it in a way? Because guess what? We are building things while people are debating things. No one is stopping the Stanford AI building because they're waiting for this to get figured out. And once we build things in there, in the system, it becomes punitive to stop it. But if you get ahead of the curve, which we're talking about in the design process, in the concept, you're partnering with the innovation. And so, really fundamentally, the way we pose the question is: how do you have the tools and information so that ethical people can make the right decision? 'Cause we don't even have that.
– Thank you. Others.
– Thank you. I mean, I guess, in the
conversation about associations and associational forms, which is a callback to your point about funding, I don't know that that was fully addressed. Both in terms of specific associations or organizations, like the Partnership on AI for example, that are being developed right now to do some of this work. Like, who is funding that, and what are the impacts of that, and should folks be paying attention to that? And if so, how and why. And then, to the point about the non-profit model as the management of private financial resources for the public good: just that that arguably emerged at a specific political and economic moment as a mechanism for hoarding wealth and concentrating power. And so, with the application of that to data as an associational form and as an institutional form, I would just want to flesh out that history a little more. And flesh out how that actually functions and how that could look
different in the context of data. Yeah. – Can I make one other point? – Can we make things more fun? ‘Cause like I wanna
work on responsible AI. But I kinda wanna have fun with it too. So if there are ways that academics or people are thinking about adding more art, and there's the lovely poetry, and bringing that flavor of what a different voice feels like, from a place of strength. So that it's like, oh, we're not depressed now, everyone's popping something after work, ugh. But if there's a way to do that, I think it would just, literally, that's the humanity.
– I think Stephanie does. I was at the AI assembly that she had this year.
– Oh okay.
– Yeah, so Stephanie is, and people are doing that work. So yeah.
– But more integrated, you know.
– No, yeah, absolutely, more.
– Let's have fun. (audience laughing)
– I wrote it down.
– Are you not entertained?
– Any of you have final thoughts or comments you wanna share? (audience chattering)
– It's too bad that we're out of time now. There's so much we only got halfway into. I just wanna plug really
quickly that on November 8th in Washington, DC, we're hosting something called the Data Governance Design Conference. Which is very much an intersectional event, bringing together mostly data governance practitioners, but also academics, policy makers, funders, and lawyers. Sorry about that last one. But we'll have engineers there too, just in case that helps. But essentially, it's all aimed at a really, really practice-led question about articulating a practice-led research agenda for data governance. So how do we de-risk and enable more experimentation and more ethical use in infrastructure. And Lucy and Jasmine will be joining us there. So, the Data Governance Design Conference. Which you can find at https://governingdata.org. (audience laughing)
– Is this in DC?
– It is, yeah. (audience laughing) Yeah, no, it's where all the fun is. (audience laughing) (audience chattering)
– Thank you. (audience clapping)



