Towards transparency and privacy in the online advertising business

Discrimination- and Privacy-Aware Data Mining

Lille, France, hosted this year’s FAT ML 2015 Workshop in July. Among the speakers at this event was EURECAT’s Sara Hajian, who delivered a compelling talk on Discrimination- and Privacy-Aware Data Mining. EURECAT is a member of the consortium developing TYPES.

Ms. Hajian’s talk started from the premise that, in the information society, massive and automated data collection occurs as a consequence of the ubiquitous digital traces we all generate in our daily lives. The availability of such a wealth of data makes its publication and analysis highly desirable for a variety of purposes, including policy making, planning, marketing and research. Yet the real and obvious benefits of data analysis and publishing have a dual, darker side. There are at least two potential threats for individuals whose information is published: privacy invasion and potential discrimination. Privacy invasion occurs when the values of published sensitive attributes can be linked to specific individuals (or companies). Discrimination is unfair or unequal treatment of people based on membership of a category, group or minority, without regard to individual characteristics.
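To make the privacy-invasion threat concrete, here is a minimal sketch of the kind of linkage risk at stake, using the classic k-anonymity notion: if every released record shares its quasi-identifier values with at least k-1 others, an attacker cannot single out an individual from those columns alone. The table, column names and values below are purely illustrative, not data from the talk.

```python
from collections import Counter

# Toy released records: (zip_code, age_band) are quasi-identifiers that an
# attacker might match against an external dataset; the last field is sensitive.
records = [
    ("75001", "30-39", "diabetes"),
    ("75001", "30-39", "none"),
    ("75002", "40-49", "cancer"),
    ("75001", "30-39", "none"),
    ("75002", "40-49", "none"),
]

def k_anonymity(rows, qi_indices):
    """Smallest equivalence-class size over the quasi-identifier columns.
    A table is k-anonymous when every record is indistinguishable from at
    least k-1 others on those columns; a value of 1 means some individual
    is uniquely re-identifiable."""
    groups = Counter(tuple(r[i] for i in qi_indices) for r in rows)
    return min(groups.values())

print(k_anonymity(records, qi_indices=(0, 1)))  # -> 2
```

Here the worst-case group has two members, so no record can be pinned to a single person from zip code and age band alone; dropping one of the ("75002", "40-49") rows would push the value down to 1 and expose that individual.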

Furthermore, Ms. Hajian went on to note that, on the legal side, parallel to the development of privacy legislation, anti-discrimination legislation has undergone a remarkable expansion. It now prohibits discrimination against protected groups on the grounds of race, color, religion, nationality, sex, marital status, age and pregnancy, and in a number of settings, such as credit and insurance, personnel selection and wages, and access to public services. On the technology side, efforts to guarantee privacy have led to the development of privacy-preserving data mining (PPDM), and efforts to fight discrimination have led to anti-discrimination techniques in data mining. Some proposals are oriented towards the discovery and measurement of discrimination, while others deal with discrimination prevention in data mining (DPDM), i.e. preventing data mining from itself becoming a source of discrimination through automated decision making based on discriminatory models extracted from inherently biased datasets. Ms. Hajian then described some of these techniques for discrimination prevention, simultaneous discrimination and privacy protection, and discrimination discovery, and presented some recent results.
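As a flavour of what "discovery and measurement of discrimination" can mean in practice, the sketch below computes a disparate impact ratio: the positive-outcome rate of a protected group divided by that of the remaining population, with values below 0.8 commonly read as evidence of disparate impact (the "four-fifths" rule). This is a generic fairness measure, not necessarily the specific metric used in the talk, and the decision data is invented for illustration.

```python
# Toy credit decisions: (group, outcome), outcome 1 = credit granted.
# Group labels and counts are illustrative only.
decisions = [
    ("protected", 1), ("protected", 0), ("protected", 0), ("protected", 0),
    ("other", 1), ("other", 1), ("other", 1), ("other", 0),
]

def positive_rate(rows, group):
    """Fraction of records in `group` that received the positive outcome."""
    outcomes = [o for g, o in rows if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(rows, protected="protected", reference="other"):
    """Ratio of positive-outcome rates, protected group vs. reference group.
    Under the four-fifths rule, a ratio below 0.8 flags possible
    discrimination against the protected group."""
    return positive_rate(rows, protected) / positive_rate(rows, reference)

ratio = disparate_impact(decisions)
print(round(ratio, 3))  # -> 0.333: well below 0.8, so the toy data is flagged
```

Discrimination-prevention (DPDM) methods then transform either the training data or the learned model so that measures like this one move back into the acceptable range while preserving as much predictive utility as possible.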

