Regulating Facebook to make it an information fiduciary

By Yves Faguy, April 12, 2018


If anything, Mark Zuckerberg’s testimony before Congress this week has succeeded in kicking off a debate about how to go about regulating companies like Facebook, even though it is not entirely clear yet to lawmakers what, exactly, they think requires their intervention.

In fact, they seem more inclined to ask Facebook how it thinks it should be regulated, in large part because it’s dawning on everyone that Facebook’s business isn’t easily described – some would call it a shape-shifter. That’s a fair description when one considers that it is at once a media company, a business that trades in personal data, and a tech platform.

Ultimately though, what matters is that Facebook makes money off people’s personal data, and so the real issue is: how do we get companies like it to handle that data responsibly? One interesting suggestion comes from Jack M. Balkin, who raises important questions at the intersection of personal data use and artificial intelligence. He proposes that lawmakers ensure that online service providers become “information fiduciaries” vis-à-vis their customers, clients, and end users:

We provide lots of information about ourselves — some of it quite sensitive — to people and organizations who owe us fiduciary duties or duties of confidentiality. And when we provide this information, we have, and should have, a reasonable expectation that they will respect our privacy. We have a reasonable expectation that disclosing this information to them, or allowing them to collect it from us, is not the same as making the information available to the public generally. We have a reasonable expectation, in other words, that people and organizations who owe duties of trust and confidence to us will not betray us. Indeed, the law creates and recognizes relationships of trust and confidence precisely because it wants people to have reasonable expectations of privacy in certain relationships.

To some degree, the coming into force of the EU’s General Data Protection Regulation (GDPR) is a step in that direction, forcing companies, through corporate data governance, to take better care of people’s data. Balkin, however, recognizes the inherent limits of this solution:

If it is inappropriate for a company to try to use collections of personal data to embarrass or manipulate end-users, the fact that it uses artificial intelligence or algorithms to do so hardly matters. Indeed, the fact that companies have increasingly powerful algorithms and artificial intelligence agents at their disposal means only that they have ever more power over their end-users, and therefore a greater responsibility to exercise appropriate care and loyalty.

The more interesting problem arises when algorithms use data about other people (or about large populations) in order to make predictions about users, and users have no relationship of trust or confidence with the enterprise that uses the algorithm. Examples are situations in which we apply for credit; or seek employment, housing, or business opportunities. Companies will use algorithms and artificial intelligence to try to predict people’s behavior in advance. Their construction of social spaces and social opportunities will affect more than their base of end-users and clients.

When we are the end-user, client or customer of such a company, the problem of discrimination and manipulation arises out of an abuse of trust in the use of our personal information. But when we are not an end-user, client, or customer, there is no violation of a special relationship.
