Our first event of the semester, featuring Tufts University’s Moon Duchin and Georgetown’s Kobbi Nissim, gathered our biggest crowd yet. Our first speaker of the day, Prof. Duchin, is an Associate Professor of mathematics at Tufts University and director of Tufts’ interdisciplinary Science, Technology, and Society program. Her mathematical research is in geometric group theory, low-dimensional topology, and dynamics. She is the PI of the MGGG Redistricting Lab (mggg.org), a multidisciplinary team that works on the geometry and geography of electoral redistricting. In that capacity, Duchin works with legislators at a variety of levels of government, litigators, and civil society representatives.
Duchin’s talk, “Decision-Making Forensics,” was a stimulating introduction to her work with the Redistricting Lab. “Forensics,” as Duchin reminded the audience, means examining a situation and assessing what happened there previously. The idea behind the talk, then, was to walk us through how one can look at the shape of an electoral district and assess whether its borders were drawn to confer partisan advantage on a particular party: whether, more simply, the practice known as gerrymandering has occurred. “Boundaries,” Duchin told us, “have stories.” Gerrymandering is a particularly salient problem in the US, which is the only country in the world that (1) mandates regular re-drawing of districts and (2) puts (partisan) elected officials in charge of the process. The ultimate goal of Duchin’s lab is, in its members’ capacity as geometers and mathematicians, to come up with formalizations of districting that eliminate partisan bias.
We learned how minimal the US Constitution’s election guidance is, requiring only that states be apportioned House representation “according to their respective numbers.” (This is the same sentence, Duchin reminded us, that worked in tandem with counting enslaved people as three-fifths of a person before the abolition of slavery.) But this relative simplicity quickly gives way to complications. Counting people is hard, beginning with something as prosaic as the fact that a state cannot have 8.15 representatives in the House; human beings, alas, do not come in decimals. The problem of virtual representation lurks behind the mechanics of counting. Who counts for apportionment? Who votes? Whose interests are plausibly represented in decision-making milieus?
Electoral maps are guided by censuses; a census, in other words, is the underlying truth and repository of data for district makers. “The census gives us the canonical data,” Duchin noted. “It’s our truth.” Here begins a tangle of problems, such as the underfunding of the Census Bureau. For example, when a practice like knocking on doors is replaced with email-based information collection due to funding limitations, the portion of society with access to the Internet will inevitably be more favorably represented. Once the census enumeration is published, it becomes a fact, frozen for ten years, regardless of its imperfections. District makers, including Duchin’s teammates, have to work with such data to mathematize the problem of districting and assess how well different groups can be represented.
But one must perhaps ask an even more basic question: why districting to begin with? Districts, the argument goes, provide opportunities for qualifying minorities to elect a candidate of choice; they are good at giving some form of representation to people who do not hold a majority. A group’s representability depends crucially (and mysteriously) on the geometry of its cartographic layout. For instance, segregation, which is inevitably a vehicle of discrimination, may also give the districting authority a great deal of control in providing communities (which are otherwise national minorities) with representational power. Duchin illustrated these points with geometrical representations of the city of Tuskegee, AL, which voted to shrink itself in 1957. The city was so segregated that, by redrawing its boundaries, it was able to exclude its Black population without losing any of its white population.
The goal for a redistricter is to arrange equipopulous districts so that a minority with V% of the votes receives anywhere from 0% to 2V% of the representation. For instance, a 40% voting bloc can either be completely shut out or control up to 80% of the representation, depending on how the lines are drawn. The redistricter devises preferred scores of partisan tilt (e.g., partisan symmetry, the efficiency gap) and aims for an ideal layout, or imposes limits. Another step is to chart the universe of possibilities in order to discover a normal range of outcomes.
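The 0-to-2V% range can be checked with a toy computation. The numbers below (10 districts of 1,000 voters each, and a bloc of 4,008 voters) are hypothetical, chosen only to make the arithmetic clean; they are not figures from the talk.

```python
# Toy illustration of the 0%-to-2V% seat-share range: 10 districts of
# 1,000 voters each, and a minority bloc of 4,008 voters (~40%).

DISTRICTS, SIZE = 10, 1000
BLOC = 4008  # ~40.1% of the 10,000 voters

def seats_won(plan):
    """Count districts in which the bloc holds a strict majority."""
    assert len(plan) == DISTRICTS and sum(plan) == BLOC
    return sum(1 for bloc_voters in plan if bloc_voters > SIZE // 2)

# "Cracking": spread the bloc thinly, so it is a minority everywhere.
cracked = [401] * 8 + [400] * 2

# Bare majorities: 501 bloc voters in 8 districts, none elsewhere.
bare_majorities = [501] * 8 + [0] * 2

print(seats_won(cracked))          # 0 of 10 seats
print(seats_won(bare_majorities))  # 8 of 10 seats, about double the vote share
```

The same 40% of voters thus ends up with anywhere from 0% to 80% of the seats, purely as a function of where the lines fall.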
Duchin concluded her talk by noting that the opposite of gerrymandering is not harmony; it’s simply not gerrymandering. In other words, exact proportionality of seats to votes will not be achieved; what is achievable is to note what the math dictates in the absence of a partisan agenda. In that vein, “algorithmic governance” is not to automate, optimize, simulate, or predict, but to verify that a policy comports with its design principles.
Kobbi Nissim, our second speaker for the day, is a Professor of Computer Science at Georgetown University. He works towards establishing rigorous practices for privacy in computation: identifying problems that result from the collection, sharing, and processing of information, formalizing these problems, and studying them with an eye towards creating solid practices and technological solutions. He is particularly interested in intersection points between privacy and various disciplines within and outside computer science, including cryptography, machine learning, game theory, complexity theory, algorithmics, statistics, databases, and, more recently, privacy law and policy.
Nissim’s talk revolved around the incompatibility between mathematical and legal concepts of privacy and the need for rigorous analysis in public policy to bridge this gap. It is a field, he noted, that necessitates the joint work of legal scholars, computer scientists, and mathematicians.
The history of legal privacy concepts stretches from FERPA and HIPAA in the US to, more recently, the GDPR (General Data Protection Regulation) in the European Union. But the definitions of privacy within these codes, Nissim contended, often remain at an intuitive level rather than a rigorous or formal one. We must, then, put together the two pieces of the puzzle: the legal and technical definitions of privacy. The overarching goal here is to make rigorous claims that the use of differential privacy satisfies legal requirements.
Nissim went over a number of projects that work towards this end. One instance is “robot lawyers,” a legal AI application that automatically generates a license for researchers downloading files from a social science data repository. In this project, inputs such as formalizations of the relevant legislation, a license template, license terms, repository-specific conditions, and facts about the dataset (gathered via a questionnaire) are combined to produce a human-readable license as output.
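The shape of that pipeline can be sketched schematically. Everything below (the template text, the rule table, the questionnaire fields) is invented for illustration and is not the project’s actual design.

```python
# Hypothetical sketch of the "robot lawyer" pipeline shape: facts from
# a questionnaire plus repository-specific conditions select clauses
# that fill a license template. All names and clauses are invented.

TEMPLATE = ("The downloader agrees to the following handling terms: "
            "{handling}. Redistribution of the dataset is {redistribution}.")

# Rule table: (contains_personal_data, has_irb_approval) -> clause.
HANDLING_RULES = {
    (True, True):   "store the data on encrypted media only",
    (True, False):  "obtain IRB approval before any analysis",
    (False, True):  "exercise standard research care",
    (False, False): "exercise standard research care",
}

def generate_license(facts):
    """Produce a human-readable license from questionnaire answers."""
    key = (facts["personal_data"], facts["irb_approved"])
    redistribution = "prohibited" if facts["personal_data"] else "permitted"
    return TEMPLATE.format(handling=HANDLING_RULES[key],
                           redistribution=redistribution)

print(generate_license({"personal_data": True, "irb_approved": False}))
```

The point of the design is that the legal reasoning lives in an auditable rule table, while the output remains ordinary prose a researcher can read.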
Another instance is the GDPR, the set of regulations that lays down the rules relating to the protection of natural persons with regard to the processing of personal data in the EU. Here again, the issue is to formalize the definitions of the categories in the statute. The fourth article of the GDPR states that personal data “means any information relating to an identified or identifiable natural person,” where an identifiable natural person is one who can be identified directly or indirectly. But what does identifying mean in this case? Does it mean singling a person out? If there is exactly one person in the data with a particular set of attributes, pointing to that combination would amount to singling out that individual. Isolation, or singling out, occurred with high probability in the examples Nissim showed.
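To see why singling out becomes likely, consider a toy simulation (my own construction, not one of Nissim’s examples): as the number of recorded attributes grows, almost every record’s attribute combination becomes unique in the dataset.

```python
import random

def fraction_singled_out(n_people, n_attrs, seed=0):
    """Fraction of people whose binary-attribute vector is unique."""
    rng = random.Random(seed)
    rows = [tuple(rng.randint(0, 1) for _ in range(n_attrs))
            for _ in range(n_people)]
    counts = {}
    for row in rows:
        counts[row] = counts.get(row, 0) + 1
    return sum(1 for row in rows if counts[row] == 1) / n_people

# With 5 attributes, 1,000 people share only 32 possible combinations,
# so almost nobody is unique; with 30 attributes, almost everybody is.
for n_attrs in (5, 15, 30):
    print(n_attrs, fraction_singled_out(1000, n_attrs))
```

Even a handful of innocuous-looking attributes can therefore suffice to pick an individual out of a dataset, which is what makes the informal “identifiable” standard so hard to pin down formally.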
The audience brought up a number of empirical and theoretical questions in the question-and-answer period. Duchin was asked repeatedly about the selection of one map for a district over others, which prompted her to highlight a passage from a Supreme Court case in which Justice Samuel Alito raised the same question. (“[Suppose] you have a computer program that … weights [all the so-called neutral criteria],” Justice Alito began, and expressed wonder at the task of selecting a map from within what he thought were thousands of possibilities, when in fact that number is at the level of googols.) How would it be fair to choose any one of them other than by random choice, he asked. Duchin welcomed this line of questioning for its receptiveness to the idea of generating non-gerrymandered maps by an algorithmic method. Another hard question, she noted, is whether we will be bound by the mathematical consequences of districts. This is a hard, normative question, because 50/50 voting does not give us 50/50 representation. Even so, Duchin argued against fellow mathematicians who might rush to discard one system of representation in favor of a completely new one. Duchin’s work with a variety of professional groups has convinced her that districting methods can be sticky. This is particularly the case with congressional districts, which can be locked into a particular method for good. This issue has prompted Duchin to explore and work with diverse forms of legislative bodies, like city councils. Cambridge, MA, for instance, uses a ranked-choice system, the only such case in the country.
Our moderator Ali Khan asked how comforting mathematics in the form of an “if-then” statement can be, and how such statements can begin to feel like a menace when the subject matter is privacy and legality. Nissim replied that any model should be judged simply by whether it is a good model of the problem, and not by anything beyond that; no model is a model of the world. Wearing her STS hat, Duchin added that mathematics made the deliberate move of limiting itself to an “if-then” practice only in the 1920s. The move was a double-edged sword: it meant modesty on the one hand, and an abdication of the claim to talk about the world on the other.