HS2022: 62520 Seminar Advanced Topics in Learning, Privacy and Fairness

This course looks at recent research in privacy and fairness in machine learning from a theoretical, algorithmic, and legal perspective. It is best taken in parallel with the basic course on fairness and privacy.

General Information

Course Description
This course covers the fundamental mathematical and legal aspects of privacy and fairness. It examines the mathematical theory of differential privacy and how it relates to robustness and reproducibility in machine learning. Considerable time is also spent on the theory of algorithmic fairness, and these formal definitions are related to the legal obligations concerning privacy and fairness.
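
For orientation only: the notion at the heart of the differential-privacy readings is the standard (ε, δ)-definition. The formulation below is a reminder in the usual notation, not part of the official course material.

```latex
% A randomized mechanism M is (epsilon, delta)-differentially private if,
% for all neighbouring datasets D and D' (differing in a single record)
% and every measurable set S of outputs,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S] + \delta ,
\]
% and the special case delta = 0 is pure epsilon-differential privacy.
```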

Evaluation is based on critical reading, reviews, and presentations of important papers in this area. Every student reads one paper per week, and there are one to two presentations per week. Each student presents one or two papers, depending on the number of students enrolled. A total of 10-15 papers will be presented.
Course Programme
A number of papers on differential privacy, fairness, and reproducibility will be covered. This is a tentative list:

1. Legal and general papers:


- Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, Ohm, 2009.
- Big Data's Disparate Impact. Barocas and Selbst, 2016.
- Robust De-anonymization of Large Sparse Datasets. Narayanan and Shmatikov, 2008.
- A Survey Technique for Eliminating Evasive Answer Bias, Warner, 1965.
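
The Warner (1965) paper above is the earliest randomised-response mechanism on the list. The snippet below is a minimal sketch of the commonly taught coin-flip variant rather than Warner's exact design; the function names and the default p_truth = 0.75 are illustrative choices of ours.

```python
import random

def randomized_response(true_answer: bool, p_truth: float = 0.75) -> bool:
    """Answer truthfully with probability p_truth; otherwise answer
    uniformly at random (a simplified Warner-style mechanism)."""
    if random.random() < p_truth:
        return true_answer
    return random.random() < 0.5

def estimate_prevalence(responses, p_truth: float = 0.75) -> float:
    """Unbiased estimate of the true 'yes' rate pi, using
    E[response] = p_truth * pi + (1 - p_truth) / 2."""
    observed = sum(responses) / len(responses)
    return (observed - (1 - p_truth) / 2) / p_truth
```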

2. Differential privacy

- Calibrating Noise to Sensitivity in Private Data Analysis. Dwork, McSherry, Nissim and Smith, 2006. (Approximate DP: see also https://github.com/frankmcsherry/blog/blob/master/posts/2017-02-08.md ; a toy Laplace-mechanism sketch follows this list.)
- The Staircase Mechanism in Differential Privacy. Geng et al., 2015.
- Rényi Differential Privacy. Mironov, 2017.
- Distributed Differential Privacy via Shuffling. Cheu et al., 2019.
- Federated Naive Bayes under Differential Privacy. Marchioro et al.
- Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays. Homer et al., 2008.
- Needles in the Haystack: Identifying Individuals Present in Pooled Genomic Data. Braun et al., 2009.
- Privacy Preserving GWAS Data Sharing. Uhler et al., 2013.
- A New Analysis of Differential Privacy’s Generalization Guarantees. Jung et al., 2019.
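
The Dwork et al. paper at the top of this block introduces noise calibrated to a query's global sensitivity (the Laplace mechanism). The sketch below illustrates that calibration, scale = Δf/ε, in Python; the function name, the NumPy dependency and the example values are our own choices, not reference code for the course.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng=None) -> float:
    """Release true_value plus Laplace noise with scale sensitivity/epsilon,
    which yields epsilon-differential privacy for a query with the given
    global sensitivity (Dwork, McSherry, Nissim and Smith, 2006)."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # b = Delta f / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a counting query (sensitivity 1) at epsilon = 0.5.
private_count = laplace_mechanism(true_value=42.0, sensitivity=1.0, epsilon=0.5)
```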

3. Fairness
- Fair prediction with disparate impact: A study of bias in recidivism prediction instruments, Chouldechova, 2017.
- Inherent Trade-Offs in the Fair Determination of Risk Scores. Kleinberg et al., 2016. (The conflicting criteria are summarised after this list.)
- Meritocratic Fairness for Cross-Population Selection, Kearns et al. 2017.
- Fairness through awareness, Dwork et al. 2011.
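
The Chouldechova and Kleinberg et al. papers above centre on the incompatibility of natural fairness criteria for risk scores. As a reminder, in the usual notation (score S, binary outcome Y, group A); this summary is ours, not part of the assigned texts:

```latex
\begin{align*}
  &\text{Calibration within groups:} && \Pr[Y=1 \mid S=s, A=a] = s \quad \text{for all } s, a,\\
  &\text{Balance for the positive class:} && \mathbb{E}[S \mid Y=1, A=a] \text{ equal across groups } a,\\
  &\text{Balance for the negative class:} && \mathbb{E}[S \mid Y=0, A=a] \text{ equal across groups } a.
\end{align*}
% Kleinberg et al. (2016): all three hold simultaneously only if the groups
% have equal base rates or the predictor is perfect.
```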


Target Audience
This course is aimed at Master's students in computer science, economics, law, or statistics who are considering writing a thesis on this topic. Students in computer science, economics, or statistics are strongly encouraged to take the basic course on privacy and learning in parallel.

General

Language
English
Copyright
All rights to this work are reserved by the owner.

Contact

Name
Christos Dimitrakakis
E-Mail
christos.dimitrakakis@unine.ch
Office hours
By appointment

Availability

Access
Unlimited – while the course is online
Admission procedure
You can join this course directly.
Enrollment period
Unlimited
Course period
27 Sep 2022, 14:15 - 13 Dec 2022, 16:00

Data shared with course administrators

Personal profile data
Login name
First name
Last name
Email