Current transport systems do not sufficiently take into account the characteristics of women, either in the design of products and services or in fostering equal employment opportunities in the industry. As a priority action, the European Commission launched the “Women in Transport – EU Platform for change” in November 2017, to exchange best practices on strengthening women’s employment and equal opportunities in the transport sector.
Although there is a concrete need to consider women’s characteristics and needs, doing so is challenging given the many roles women can play in transport systems.
It is urgent to collect and analyse data on women’s expectations and needs to be able to guarantee inclusive transport systems and close the gender gap in the sector.
The large amount of data that is generated nowadays can certainly help collect information about women and their behaviour, through increasingly sophisticated algorithms and novel technologies, such as Artificial Intelligence (AI).
Accurate insights, however, are not enough to support women in effective ways. One of the main issues with these automated processes is their strong dependency on the data, which are biased by nature, as men are often much more strongly represented than women.
Bias present in the data may then be reinforced by AI algorithms, since they learn patterns from the data they observe. A recent example is the AI-powered recruitment tool that Amazon was using, which was scrapped because it made sexist decisions; it had learnt from the data that men were a better fit for high-level positions.
When reinforced by AI algorithms, data that is biased towards men can foster sexist decisions.
Fair algorithms to overcome data bias
Fairness is a property that allows us to build decision-making algorithms that do not discriminate against people on the basis of sensitive attributes, such as gender, religion, or disability. In order to build fair algorithms, it is important to define measures that allow us to characterise what it means to be fair (i.e., under which conditions an algorithm does not discriminate against users). Thanks to these measures, it is possible to build fair algorithms through three main approaches:
- pre-processing the data, to reduce the difference between the protected class (in our case, women) and the non-protected one; in this way, the algorithm should avoid learning possible discrimination present in the data;
- in-processing, i.e., plugging the fairness measures into the algorithms themselves, making them operate fairly by design;
- post-processing, which usually re-ranks the unfair results produced by an algorithm, thus mitigating the unfairness coming from it.
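As a minimal illustration of these ideas, the sketch below shows a simple fairness measure (the difference in positive-decision rates between groups, often called demographic parity) and a toy post-processing step that re-ranks a scored list so the protected group is represented in the top positions. The data, function names, and threshold are hypothetical and not taken from the DIAMOND project:

```python
def demographic_parity_diff(decisions, groups, protected="F"):
    """Difference in positive-decision rates between the non-protected
    and the protected group; 0.0 means both groups are treated equally."""
    prot = [d for d, g in zip(decisions, groups) if g == protected]
    non = [d for d, g in zip(decisions, groups) if g != protected]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(non) - rate(prot)


def rerank_topk(items, k, min_protected, protected="F"):
    """Post-processing: reorder a scored list so the top-k slate contains
    at least `min_protected` items from the protected group, promoting
    the highest-scoring protected items from the tail if needed."""
    ranked = sorted(items, key=lambda it: -it["score"])
    top, rest = ranked[:k], ranked[k:]
    n_prot = sum(it["group"] == protected for it in top)
    while n_prot < min_protected:
        # promote the best protected item still outside the slate
        promo = next((it for it in rest if it["group"] == protected), None)
        if promo is None:
            break  # no protected items left to promote
        # demote the lowest-scoring non-protected item in the slate
        demote = next(it for it in reversed(top) if it["group"] != protected)
        top[top.index(demote)] = promo
        rest.remove(promo)
        rest.append(demote)
        n_prot += 1
    return top + rest
```

In practice, such measures and re-ranking strategies are far more sophisticated, but the structure is the same: quantify the disparity, then intervene before, inside, or after the algorithm to reduce it.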
It becomes imperative to build fair transport systems for women, to make sure they are not discriminated against under any condition. This means that fairness for women should be guaranteed whatever role they play in a transport system, e.g., as users of public transport, as users of shared services (like bike sharing), as passengers, or as employees.
The goal of the DIAMOND project is to turn data into actionable knowledge about women’s needs in transport systems. Thanks to this actionable knowledge, we aim to define notions of fairness that will allow us to design systems that are more inclusive towards women.
These notions of fairness will be plugged inside decision-making algorithms. More specifically, the project will develop decision-support systems, which will allow transport operators to assess their level of inclusiveness and take concrete action to mitigate possible unfairness. As an example, a transport operator might upload data about the current conditions of a station (e.g., in terms of security, lights, opening hours). Our decision-support system would analyse under which conditions a station usually provides a fair service for women and provide recommendations of the type: “In the station ID z, you need x cameras to increase the feeling of safety. You currently only have y”.
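A recommendation of this kind could, in the simplest case, amount to comparing a station’s current conditions against the conditions under which similar stations were found to provide a fair service. The sketch below is purely illustrative: the baseline thresholds, feature names, and message format are hypothetical, not the project’s actual system:

```python
# Illustrative baseline: conditions under which (hypothetically) stations
# were found to provide a fair service for women.
FAIR_BASELINE = {"cameras": 8, "lights": 20}


def recommend(station_id, conditions, baseline=FAIR_BASELINE):
    """Compare a station's current conditions against the fair baseline
    and return a human-readable recommendation for each shortfall."""
    messages = []
    for feature, needed in baseline.items():
        have = conditions.get(feature, 0)
        if have < needed:
            messages.append(
                f"In station {station_id}, you need {needed} {feature} "
                f"to increase the feeling of safety. "
                f"You currently only have {have}."
            )
    return messages
```

For example, a station with 5 cameras and 25 lights would receive a single recommendation about cameras. A real decision-support system would of course learn such baselines from data rather than hard-code them.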
The DIAMOND project will develop a decision-support system to fight gender inequality in the transport sector.
In addition, the project will provide tools that allow stakeholders to see how the quality of service improves when a station runs under fair conditions, based on the experience of other stations collected during the project. In the long term, this framework would lead to a fairer system, less prone to fostering social inequalities in the public transport sector.
Ludovico Boratto
Senior Researcher, Data Science & Big Data Analytics Unit, Eurecat
Ludovico Boratto is a senior research scientist in the Data Science and Big Data Analytics research group at Eurecat. His research interests focus on Data Mining and Machine Learning approaches, mostly applied to recommender systems and social media analysis. The results of his research have been published in top-tier journals, such as Information Sciences (Elsevier) and IEEE Intelligent Systems. His research activity has also brought him to give talks and tutorials at top-tier conferences (e.g., ACM RecSys 2016, IEEE ICDM 2017) and research centres (Yahoo! Research). He is editor of the book “Group Recommender Systems: An Introduction”, published by Springer, and guest editor of special issues for several ISI- and Scopus-indexed journals. He has been part of the programme committee of the main Data Mining and Web conferences, such as RecSys, WSDM, ICWSM, and TheWebConf. In 2012, he received his Ph.D. from the University of Cagliari (Italy), where he was a research assistant until May 2016. In 2010 and 2014, he spent 10 months at Yahoo! Research in Barcelona as a visiting researcher. He is a member of the ACM and the IEEE.