Streaming Live: https://www.youtube.com/user/citpprinceton
Food and discussion begin at 12:30pm. Open to current Princeton faculty, staff, and students. Open to members of the public by invitation only. Please contact Laura Cummings-Abdo if you are interested in attending a particular lunch.
Important decisions about people are increasingly made by algorithms: Votes are counted; voter rolls are purged; financial aid decisions are made; taxpayers are chosen for audits; air travelers are selected for enhanced search; credit eligibility decisions are made. Citizens, and society as a whole, have an interest in making these processes more transparent. Yet the full basis for these decisions is rarely available to affected people: the algorithm or some inputs may be secret; or the implementation may be secret; or the process may not be precisely described. A person who suspects the process went wrong has little recourse. And an oversight authority who wants to ensure that decisions are made according to an acceptable policy has little assurance that proffered decision rules match decisions for actual users.
Traditionally, Computer Science addresses these problems by demanding a specification of the desired behavior, which can then be enforced or verified. But this model is poorly suited to real-world oversight tasks, where the specification might be complicated or might not be known in advance. For example, laws are often ambiguous precisely because it would be politically (and practically) infeasible to give a precise specification of their meaning. Instead, people do their best to approximate what they believe the law will allow, and disputes about what is actually allowed happen after the fact via expensive investigation and adjudication (e.g. in a court or legislature). As a result, actual oversight, in which real decisions are reviewed for their correctness, fairness, or faithfulness to a rule, happens only rarely, if at all.
We present a novel approach to relating the tools of technology to the problem of overseeing decision-making processes. Our methods use the tools of computer science to cryptographically ensure the technical properties that can be proven, while providing the necessary information so that a political, legal, or social oversight process can operate effectively. First, we present a system for the accountable execution of legal warrants, in which the decision by a judge to allow an investigator access to private or sensitive records is operationalized cryptographically, so that the investigator’s access to sensitive information is limited to only that information which the judge has explicitly allowed (and this can be confirmed by a disinterested third party). This system is an example of the current style of technical systems for accountability: a well-defined policy, specified in advance, is operationalized with technical tools. In this system, however, the goal is not just to enforce a policy, but to convince others that the policy is being enforced correctly. Second, we present accountable algorithms, unifying the tools of zero-knowledge computational integrity with cryptographic commitments to design processes that admit meaningful after-the-fact oversight, consistent with the norm in law and policy. Accountable algorithms can attest to the valid operation of a decision policy even when all or part of that policy is kept secret.
As an example, consider a government tax authority that is deciding which taxpayers to audit. Taxpayers are worried that audit decisions may be based on bias or political agenda rather than legitimate criteria; or they may be worried that the authority’s code is buggy. The authority does not want to disclose the details of its decision algorithm, for fear that tax evaders will be able to avoid audits. The accountable algorithms framework will allow the tax authority to maintain the secrecy of its algorithm (in the sense that any observer learns nothing about the algorithm beyond what is conveyed by whatever input-output pairs that observer can see) while allowing each taxpayer to verify that:
- the authority committed to its secret algorithm in advance,
- the result asserted by the authority is the correct output of the authority’s algorithm when applied to the individual taxpayer’s data, and
- the authority can reveal its algorithm to an oversight body (such as a court or legislature) for examination later, and taxpayers can verify that the revealed algorithm is the same one used to make decisions about them.
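The commitment step in this scenario can be sketched with a simple hash-based commitment. This is an illustrative Python sketch with hypothetical names, not the actual system: the authority publishes a hiding, binding commitment to its secret policy before any audit decisions are made, and an oversight body later checks the revealed policy against that commitment. The per-decision zero-knowledge proofs of correct execution require cryptographic machinery (e.g. zero-knowledge proofs of computational integrity) not shown here.

```python
import hashlib
import json
import secrets

def commit(policy_bytes: bytes) -> tuple[str, bytes]:
    """Commitment H(nonce || policy): the random nonce hides the policy,
    and the hash binds the authority to it."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + policy_bytes).hexdigest()
    return digest, nonce

def verify(commitment: str, nonce: bytes, policy_bytes: bytes) -> bool:
    """Check that a revealed (nonce, policy) pair matches the commitment."""
    return hashlib.sha256(nonce + policy_bytes).hexdigest() == commitment

# Authority side: serialize the secret audit policy deterministically and
# publish only the commitment, before making any decisions.
policy = {"rule": "audit if reported deductions exceed threshold",
          "threshold": 40000}  # hypothetical example policy
policy_bytes = json.dumps(policy, sort_keys=True).encode()
commitment, nonce = commit(policy_bytes)

# Oversight side, after the fact: receive (nonce, policy) and confirm the
# revealed policy is the one committed to in advance.
assert verify(commitment, nonce, policy_bytes)
```

Because the hash is binding, the authority cannot later substitute a different policy; because the nonce is secret and random, publishing the commitment reveals nothing about the policy itself.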
Joshua A. Kroll is a PhD candidate in Computer Science at the Center for Information Technology Policy at Princeton University, where he is advised by Edward W. Felten. His research spans computer security, privacy, and the interplay between technology and public policy, with a particular focus on how to design automated processes for accountability. He received the National Science Foundation Graduate Research Fellowship in 2011.