Paper corresponding to this code: https://ieeexplore.ieee.org/abstract/document/10943725 or https://openaccess.thecvf.com/content/WACV2025/html/Al-Refai_FALCON_Fair_Face_Recognition_via_Local_Optimal_Feature_Normalization_WACV_2025_paper.html
Face recognition systems are widely used for identity verification in various fields. However, recent studies have highlighted bias issues related to demographic and non-demographic attributes such as accessories, hair color, ethnicity, or gender. These biases lead to higher error rates for specific attribute subgroups, which is especially problematic in critical areas like forensics, where these systems are deployed. Addressing this issue requires a solution that reduces bias without compromising accuracy. Existing methods focus on learning less biased face representations, but they are often difficult to integrate into current systems or negatively impact overall recognition performance. This work introduces FALCON (Fair Adaptation Through Local Optimal Normalization), an effective method to increase fairness in face recognition systems. FALCON operates in an unsupervised manner, addressing bias without requiring demographic labels, and can be easily integrated as a post-processing step. It treats individuals with similar traits similarly, reducing bias in face recognition by processing each image individually. The proposed method is rigorously tested across various face recognition models and datasets, and compared with four other fairness post-processing methods. Results show that FALCON significantly enhances both fairness and accuracy. Unlike other methods, it allows the fairness-accuracy trade-off to be adjusted seamlessly while effectively addressing bias.
- Folder Data should hold all files regarding the database information. For each investigated face recognition system and database it holds the pre-processed embeddings, the filenames, the identity information and the attribute labels for each sample.
- It should have the following structure (a minimal loading sketch is given after the tree):
  - Data:
    - [FRSystem]:
      - [Dataset]:
        - emb.npy: embeddings of the face recognition system
        - filenames.npy: filenames of the images
        - identities.npy: identity information of the images
        - labels_{age/gender/...}.npy: attribute labels of the images
      - [Dataset]:
        - ...
    - [FRSystem]:
      - ...
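For orientation, a minimal sketch of how these files could be loaded with NumPy; the folder names FRSystem and Dataset and the attribute gender are placeholders:

```python
import numpy as np

# Placeholder path; replace FRSystem and Dataset with the actual folder names.
base = "Data/FRSystem/Dataset"

embeddings = np.load(f"{base}/emb.npy")          # pre-processed face embeddings
filenames = np.load(f"{base}/filenames.npy")     # image filenames
identities = np.load(f"{base}/identities.npy")   # identity information per sample
gender = np.load(f"{base}/labels_gender.npy")    # attribute labels (here: gender)

# All arrays are expected to be aligned per sample.
assert len(embeddings) == len(filenames) == len(identities) == len(gender)
print(embeddings.shape, np.unique(gender))
```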
- Folders Baseline, FALCON and FSN each hold a Python file for executing the corresponding method. After executing the experiments, each of them will also contain a folder result with all the `Result` objects that contain the evaluation results after applying the method in several settings
- Folder helper_classes holds several files that provide classes which are helpful for applying the methods
  - File dataset.py provides a `Database` class which supplies all information and files regarding a specific database
  - File fairness_approach.py provides the base class `FairnessApproach` that all methods Baseline, FALCON and FSN inherit from. It contains the interface methods and the common methods.
  - File result.py provides a `Result` class, an object that contains and makes available all evaluation results in a structured way.
  - File dataset_infos.py does not provide a class but information about the distribution of the labels for each attribute of a specific `Database` object
- Folder tools holds several files that are helpful for the methods and their evaluation
  - File enums.py holds all enumerations that are used within this thesis. It holds enumerations for the `FRSystem`, the `Dataset`, the `Datatype` (embeddings, filenames or identities of datasets), the `Attribute`, the `Metric` and the `Method`.
  - File fairness_evaluation.py holds the methods for calculating the fairness metrics `FDR`, `IR` and `GARBE`.
  - File fnmr_fmr.py holds the methods to calculate the False Non-Match Rates, False Match Rates and thresholds depending on labels, scores and either a fixed FMR or a fixed threshold (a generic sketch follows this list).
  - File group_scores.py provides methods to access the files in folder Data.
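As an illustration of the quantities fnmr_fmr.py works with, here is a generic sketch, not the repository's implementation; the function name and signature are assumptions:

```python
import numpy as np

def threshold_and_fnmr_at_fmr(scores, labels, target_fmr=1e-3):
    """Generic sketch: decision threshold at a fixed FMR and the resulting FNMR.

    scores: similarity scores of comparison pairs (higher = more similar)
    labels: 1 for genuine (same-identity) pairs, 0 for impostor pairs
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    impostor = scores[labels == 0]
    genuine = scores[labels == 1]
    # Choose the threshold so that roughly target_fmr of impostor pairs are accepted.
    threshold = np.quantile(impostor, 1.0 - target_fmr)
    fnmr = float(np.mean(genuine < threshold))   # genuine pairs wrongly rejected
    fmr = float(np.mean(impostor >= threshold))  # impostor pairs wrongly accepted
    return threshold, fnmr, fmr
```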
- Folder experiments holds the files for the parameter experiment and the visualizations
  - File baseline_visualization.py provides methods to visualize the optimal local thresholds before and after applying FALCON in a plot
  - File parameter_experiment.py executes the parameter experiment, analyzing the trade-off between fairness and performance in face recognition systems using various parameter combinations.
  - File visualize-FSN-FALCON.py provides methods to create images that visualize the difference between FSN and FALCON as well as the clustering of FSN using a Voronoi diagram.
- Folder timing holds the results of the timing experiment
- File main.py is the main file that is used to execute the experiments.
To add a new method:
- Create a new folder with the name of your method (e.g. MyMethod)
- Create a new Python file with the name of your method (e.g. mymethod.py)
- Create a new class with the name of your method (e.g. MyMethod) that inherits from the `FairnessApproach` class in helper_classes/fairness_approach.py and implement at least the methods 'init', 'train', 'test' and 'save_results' (see the sketch after this list)
- In tools/enums.py add your method to the `Method` enumeration
- In main.py import your method and add it to the corresponding methods
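A hypothetical sketch of such a class; the module path and method signatures are assumptions based on the description above and may differ from the actual `FairnessApproach` interface:

```python
# Hypothetical sketch of MyMethod/mymethod.py; adapt it to the actual
# interface in helper_classes/fairness_approach.py.
from helper_classes.fairness_approach import FairnessApproach


class MyMethod(FairnessApproach):
    def init(self):
        # Set up any state the method needs, e.g. hyperparameters.
        self.some_parameter = 0.5

    def train(self):
        # Fit the post-processing step on the training embeddings/scores.
        pass

    def test(self):
        # Apply the method to the test comparisons and evaluate it.
        pass

    def save_results(self):
        # Store the evaluation results, e.g. as a Result object in the result folder.
        pass
```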
To add a new dataset:
- Create a new folder with the name of your dataset (e.g. MyDataset) in the folder of the face recognition system (e.g. FRSystem) as described above
- Add the required files (embeddings, filenames, identities and labels) to the folder (a minimal sketch for creating them follows this list)
- In tools/enums.py add your dataset to the `Dataset` enumeration
- In main.py import your dataset and use it there
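A minimal sketch for creating the required files with NumPy; the folder names, array contents and the embedding dimension are placeholders, not taken from the repository:

```python
import os
import numpy as np

# Placeholder data; replace with the actual embeddings and labels of your dataset.
embeddings = np.random.rand(4, 512).astype(np.float32)  # one embedding per image
filenames = np.array(["img_0.jpg", "img_1.jpg", "img_2.jpg", "img_3.jpg"])
identities = np.array([0, 0, 1, 1])                      # identity id per image
gender = np.array([0, 1, 0, 1])                          # attribute labels per image

out = os.path.join("Data", "FRSystem", "MyDataset")      # placeholder folder names
os.makedirs(out, exist_ok=True)
np.save(os.path.join(out, "emb.npy"), embeddings)
np.save(os.path.join(out, "filenames.npy"), filenames)
np.save(os.path.join(out, "identities.npy"), identities)
np.save(os.path.join(out, "labels_gender.npy"), gender)
```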
If you use this code, please cite the following paper:
@InProceedings{Al-Refai_2025_WACV,
author = {Al-Refai, Rouqaiah and Hempel, Philipp and Biagi, Clara and Terh\"orst, Philipp},
title = {FALCON: Fair Face Recognition via Local Optimal Feature Normalization},
booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
month = {February},
year = {2025},
pages = {3416-3426}
}
This project is licensed under the terms of the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
Copyright (c) 2025 Philipp Hempel