Welcome to the GitHub repository of the Keystroke Verification Challenge (KVC) - onGoing. The challenge provides a public and reproducible way to benchmark keystroke-based user recognition systems in desktop and mobile scenarios, using large-scale databases and a standard experimental protocol.
This ongoing challenge is based on the limited-time KVC organized within IEEE BigData 2023.
The details and results of the KVC are summarized in the following paper:
For more information about keystroke dynamics, the databases, and the challenge, please consult the following additional resources:
The main files included in this repository are:
- `utils/configs.py` contains the configuration settings used to run experiments. When launching the first training, a folder called `<configs.experiment_name>/` will be created. As you run the different scripts, a sub-directory structure identical to `pretrained/` will be created.
- `train.py` launches the training of a simple RNN defined in `models/RNN.py` with contrastive loss and two features. The training script selects the dataset based on the variable `configs.scenario`, which must be `'desktop'` or `'mobile'`. This script creates the models and the log files.
- `generate_submission_files.py` runs the evaluation and generates the zip file of predictions, considering the comparison list(s) included in the downloaded files. Do not change the file names of the text files inside the compressed file (`desktop_predictions.txt`, `mobile_predictions.txt`); the name of the zip file itself can be changed. `<configs.experiment_name>.zip` is ready to be submitted to CodaLab for scoring (a manual sketch of this format follows the list).
- `read_logs.py` plots the loss and the EER on both the training and validation sets across the training epochs.
- `utils/metrics.py` is used to compute the metrics reported below.
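For orientation, the sketch below shows how a submission archive consistent with these naming rules could be assembled by hand. It is a minimal sketch, not the repository's `generate_submission_files.py`: the `scores` list and the `my_experiment` name are hypothetical placeholders.

```python
import zipfile

# Hypothetical scores: one similarity score per line, in the same order as
# the comparison list included in the downloaded Public Data.
scores = [0.91, 0.12, 0.77]  # placeholder values

# The text file name inside the archive must not be changed
# ('desktop_predictions.txt' or 'mobile_predictions.txt').
with open("desktop_predictions.txt", "w") as f:
    f.writelines(f"{score}\n" for score in scores)

# The zip file name itself is free; here it mirrors configs.experiment_name.
with zipfile.ZipFile("my_experiment.zip", "w") as z:
    z.write("desktop_predictions.txt")
```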
First, it is necessary to enroll in the KVC. Please follow these instructions:

1. Fill out this form, including your information.
2. Sign up in this form using the same email address provided in step 1.

You are now able to join the KVC-onGoing on CodaLab.

Then, on the KVC CodaLab page, go to the Participants tab, click on Resources for participants, and download the competition Public Data.
If you use any of the parts of this repository, please cite:
@inproceedings{bigdata,
  title={{IEEE BigData 2023 Keystroke Verification Challenge (KVC)}},
  author={Giuseppe Stragapede and Ruben Vera-Rodriguez and Ruben Tolosana and others},
  booktitle={Proc. IEEE Int. Conf. on Big Data},
  year={2023}
}
@article{stragapede2023kvc,
  title={{Keystroke Verification Challenge (KVC): Biometric and Fairness Benchmark Evaluation}},
  author={Giuseppe Stragapede and Ruben Vera-Rodriguez and Ruben Tolosana and Aythami Morales and Naser Damer and Julian Fierrez and Javier Ortega-Garcia},
  journal={IEEE Access},
  year={2023}
}
Below is an example of the format of the results computed by the CodaLab scoring program. The results shown were achieved by the LSIA Team, currently in first place in the KVC for both tasks (desktop and mobile).

After submitting your scores, click on Detailed results on the CodaLab leaderboard to view the complete results; the leaderboard itself shows only the global EER (%).

Each of the reported metrics and curves is obtained using the functions in `utils/metrics.py`.
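As an illustration of how the global verification metrics in the tables below can be computed, here is a minimal sketch based on the standard definitions (EER as the operating point where FMR and FNMR cross, plus FNMR at fixed FMR targets). It assumes hypothetical arrays of genuine and impostor similarity scores and is not the repository's own `utils/metrics.py`:

```python
import numpy as np

def verification_metrics(genuine, impostor):
    """Sketch: EER and FNMR at fixed FMR operating points from raw scores."""
    genuine = np.sort(genuine)
    impostor = np.sort(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))

    # FNMR(t): fraction of genuine scores rejected (strictly below t).
    fnmr = np.searchsorted(genuine, thresholds, side="left") / genuine.size
    # FMR(t): fraction of impostor scores accepted (at or above t).
    fmr = 1.0 - np.searchsorted(impostor, thresholds, side="left") / impostor.size

    # EER: threshold where FMR and FNMR are closest.
    i = np.argmin(np.abs(fmr - fnmr))
    eer = (fmr[i] + fnmr[i]) / 2

    # FNMR at the fixed FMR operating points reported in the tables below.
    fnmr_at_fmr = {t: fnmr[np.argmin(np.abs(fmr - t))] for t in (0.001, 0.01, 0.1)}
    return eer, fnmr_at_fmr

# Hypothetical usage with synthetic scores:
rng = np.random.default_rng(0)
eer, fnmr_at_fmr = verification_metrics(
    rng.normal(0.7, 0.1, 2000),   # mated (genuine) comparisons
    rng.normal(0.3, 0.1, 20000),  # non-mated (impostor) comparisons
)
print(f"EER: {100 * eer:.2f}%")
for target, value in fnmr_at_fmr.items():
    print(f"FNMR @{100 * target:g}% FMR: {100 * value:.2f}%")
```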
Task: desktop
Global Distributions
| EER [%] | FNMR @0.1% FMR [%] | FNMR @1% FMR [%] | FNMR @10% FMR [%] | AUC [%] | Accuracy [%] |
|---|---|---|---|---|---|
| 3.33 | 44.1673 | 11.958 | 0.5071 | 99.4761 | 96.676 |

| EER [%] | AUC [%] | Accuracy [%] | Rank-1 [%] |
|---|---|---|---|
| 0.7718 | 99.8713 | 96.4278 | 98.044 |

| STD [%] | SER | FDR | IR | GARBE | SIR_a [%] | SIR_g [%] |
|---|---|---|---|---|---|---|
| 0.6418 | 1.0249 | 97.061 | 2.0791 | 0.1316 | 4.0316 | 3.045 |

| Age group | M | F | Mean |
|---|---|---|---|
| 10-13 | 96.6949 | 95.9447 | 96.3198 |
| 14-17 | 96.6405 | 96.2649 | 96.4527 |
| 18-26 | 96.7689 | 96.5207 | 96.6448 |
| 27-35 | 96.5879 | 96.2198 | 96.4038 |
| 35-44 | 96.0336 | 95.7015 | 95.8676 |
| 45-79 | 96.1689 | 94.4155 | 95.2922 |
| Mean | 96.44 | 95.8245 | |

| Age group | 10-13 | 14-17 | 18-26 | 27-35 | 35-44 | 45-79 |
|---|---|---|---|---|---|---|
| 10-13 | 0.4005 | 0.3793 | 0.3768 | 0.371 | 0.3724 | 0.3777 |
| 14-17 | 0.3793 | 0.3899 | 0.3817 | 0.3778 | 0.3785 | 0.3763 |
| 18-26 | 0.3768 | 0.3817 | 0.3816 | 0.3787 | 0.3787 | 0.38 |
| 27-35 | 0.371 | 0.3778 | 0.3787 | 0.3867 | 0.3841 | 0.3849 |
| 35-44 | 0.3724 | 0.3785 | 0.3787 | 0.3841 | 0.3978 | 0.3962 |
| 45-79 | 0.3777 | 0.3763 | 0.38 | 0.3849 | 0.3962 | 0.413 |

| Gender | M | F |
|---|---|---|
| M | 0.3866 | 0.3785 |
| F | 0.3785 | 0.3935 |
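The Rank-1 [%] column above refers to closed-set identification accuracy. A minimal sketch of the standard definition, assuming a hypothetical score matrix where `scores[i, j]` is the similarity of probe `i` against enrolled subject `j` and `labels[i]` is the true subject of probe `i` (not the repository's own implementation):

```python
import numpy as np

def rank1_accuracy(scores, labels):
    """Fraction of probes whose top-scoring enrolled subject is the correct one."""
    predicted = np.argmax(scores, axis=1)
    return (predicted == labels).mean()

# Hypothetical usage: 4 probes against 3 enrolled subjects.
scores = np.array([[0.9, 0.2, 0.1],
                   [0.3, 0.8, 0.4],
                   [0.2, 0.6, 0.5],   # misidentified probe
                   [0.1, 0.2, 0.7]])
labels = np.array([0, 1, 2, 2])
print(f"Rank-1: {100 * rank1_accuracy(scores, labels):.1f}%")  # 75.0%
```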
Task: mobile
Global Distributions
| EER [%] | FNMR @0.1% FMR [%] | FNMR @1% FMR [%] | FNMR @10% FMR [%] | AUC [%] | Accuracy [%] |
|---|---|---|---|---|---|
| 3.61 | 63.616 | 17.4376 | 0.5992 | 99.278 | 96.3872 |

| EER [%] | AUC [%] | Accuracy [%] | Rank-1 [%] |
|---|---|---|---|
| 1.034 | 99.7594 | 96.24 | 96.11 |

| STD [%] | SER | FDR | IR | GARBE | SIR_a [%] | SIR_g [%] |
|---|---|---|---|---|---|---|
| 0.6654 | 1.0254 | 94.327 | 4.0105 | 0.2137 | 5.111 | 4.8338 |

| Age group | M | F | Mean |
|---|---|---|---|
| 10-13 | 95.8898 | 96.0118 | 95.9508 |
| 14-17 | 96.0405 | 95.5469 | 95.7937 |
| 18-26 | 96.433 | 94.9858 | 95.7094 |
| 27-35 | 96.4899 | 96.0603 | 96.2751 |
| 35-44 | 96.9359 | 96.2244 | 96.5802 |
| 45-79 | 97.4024 | 95.3293 | 96.3658 |
| Mean | 96.6603 | 95.6293 | |

| Age group | 10-13 | 14-17 | 18-26 | 27-35 | 35-44 | 45-79 |
|---|---|---|---|---|---|---|
| 10-13 | 0.2964 | 0.2942 | 0.29 | 0.2867 | 0.2809 | 0.2773 |
| 14-17 | 0.2942 | 0.3058 | 0.2967 | 0.2908 | 0.2849 | 0.2735 |
| 18-26 | 0.29 | 0.2967 | 0.3183 | 0.2994 | 0.2982 | 0.2923 |
| 27-35 | 0.2867 | 0.2908 | 0.2994 | 0.3015 | 0.2902 | 0.2902 |
| 35-44 | 0.2809 | 0.2849 | 0.2982 | 0.2902 | 0.3017 | 0.2997 |
| 45-79 | 0.2773 | 0.2735 | 0.2923 | 0.2902 | 0.2997 | 0.3031 |

| Gender | M | F |
|---|---|---|
| M | 0.2977 | 0.2936 |
| F | 0.2936 | 0.3179 |
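Among the fairness columns above, STD [%] and SER have simple closed forms under their common reading in the biometric fairness literature: the standard deviation of a per-group metric, and the ratio between the worst and best per-group error, respectively. A minimal sketch under that reading (the per-group EER values are hypothetical placeholders; FDR, IR, GARBE, and SIR follow the definitions in the KVC paper and are not reproduced here):

```python
import numpy as np

# Hypothetical per-demographic-group EERs in %, e.g. one per age group.
group_eer = np.array([3.1, 3.4, 3.0, 3.6, 3.9, 4.2])

# STD [%]: spread of the per-group error around its mean.
std = group_eer.std()

# SER: skewed error ratio, worst-group error over best-group error.
ser = group_eer.max() / group_eer.min()

print(f"STD: {std:.4f}%  SER: {ser:.4f}")
```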