
[Pattern Recognition, 2025] A public and reproducible way to benchmark keystroke-based user recognition systems in desktop and mobile scenarios using large-scale databases.


Welcome to the GitHub repository of the Keystroke Verification Challenge (KVC) - onGoing. The challenge provides a public and reproducible way to benchmark keystroke-based user recognition systems in desktop and mobile scenarios, using large-scale databases and a standard experimental protocol.

This ongoing challenge is based on the limited-time KVC organized within IEEE BigData 2023.

The details and results of the KVC are summarized in the papers listed in the Citations section below.

For more information about keystroke dynamics, the databases, and the challenge, please consult the additional resources referenced in those papers.

Repository Structure

The main files included in this repository are:

  • utils/configs.py contains the configuration settings used to run experiments. When the first training is launched, a folder called <configs.experiment_name>/ is created; as you run the different scripts, a sub-directory structure identical to pretrained/ is built inside it.
  • train.py launches the training of a simple RNN defined in models/RNN.py, using a contrastive loss and two timing features (see the sketch after this list). The training script selects the dataset based on the variable configs.scenario, which must be 'desktop' or 'mobile', and creates the model checkpoints and the log files.
  • generate_submission_files.py runs the evaluation and generates the zip file of prediction(s) based on the comparison list(s) included in the downloaded files. Do not change the names of the text files inside the compressed file (desktop_predictions.txt, mobile_predictions.txt); the name of the zip file itself can be changed. <configs.experiment_name>.zip is ready to be submitted to CodaLab for scoring.
  • read_logs.py plots the loss and the EER on both the training and validation sets across the training epochs.
  • utils/metrics.py computes the metrics reported below.
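
As a rough illustration of the ingredients mentioned above, the sketch below shows how two per-keystroke timing features (hold time and press-to-press flight time; the exact feature pair is an assumption) could be built from raw timestamps, together with a standard contrastive loss. It is a minimal sketch under those assumptions, not the exact preprocessing or loss implemented in train.py.

```python
import numpy as np
import torch

def extract_features(press_times, release_times):
    """Hypothetical sketch: two per-keystroke timing features (ms).

    press_times / release_times hold one timestamp per typed key, in
    typing order; the actual preprocessing lives in train.py.
    """
    press = np.asarray(press_times, dtype=float)
    release = np.asarray(release_times, dtype=float)
    hold_time = release - press                       # how long each key is held
    flight_time = np.diff(press, prepend=press[0])    # press-to-press interval
    return np.stack([hold_time, flight_time], axis=1)  # shape (seq_len, 2)

def contrastive_loss(distance, label, margin=1.0):
    """Standard contrastive loss on embedding distances.

    label is 1 for genuine pairs (same subject) and 0 for impostor pairs.
    """
    pos = label * distance.pow(2)
    neg = (1 - label) * torch.clamp(margin - distance, min=0).pow(2)
    return (pos + neg).mean()
```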

Data Download

First, it is necessary to enroll in the KVC. Please follow these instructions:

  1. Fill out this form, including your information.

  2. Sign up in this form using the same email address entered in step 1.

You are now able to join the KVC-onGoing on CodaLab.

Then, on the KVC CodaLab page, go to the Participants tab, then click on Resources for participants and download the competition Public Data.

Leaderboard

Public Code from Participants

Citations

If you use any of the parts of this repository, please cite:

@inproceedings{bigdata,
  title={{IEEE BigData 2023 Keystroke Verification Challenge (KVC)}},
  author={Giuseppe Stragapede and Ruben Vera-Rodriguez and Ruben Tolosana and others},
  booktitle={Proc. IEEE International Conference on Big Data},
  year={2023}
}

@article{stragapede2023kvc,
  title={{Keystroke Verification Challenge (KVC): Biometric and Fairness Benchmark Evaluation}},
  author={Giuseppe Stragapede and Ruben Vera-Rodriguez and Ruben Tolosana and Aythami Morales and Naser Damer and Julian Fierrez and Javier Ortega-Garcia},
  journal={IEEE Access},
  year={2023}
}

Extended Results

Below is an example of the format of the results computed by the CodaLab scoring program. The results shown were achieved by the LSIA team, currently in first place in the KVC for both tasks (desktop and mobile).

After submitting your scores, click on Detailed results on the CodaLab leaderboard to view the complete results; the leaderboard itself shows only the global EER (%).

Each of the reported metrics and curves is obtained using the functions in utils/metrics.py.
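
For orientation, the sketch below shows one common way to compute the global EER and the FNMR at a fixed FMR from genuine and impostor similarity scores. It is a minimal illustration, not the actual implementation in utils/metrics.py.

```python
import numpy as np

def compute_eer(genuine, impostor):
    """Approximate the Equal Error Rate (EER), in %, from similarity scores.

    The EER is the operating point where the False Non-Match Rate (FNMR)
    equals the False Match Rate (FMR).
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    fnmr = np.array([(genuine < t).mean() for t in thresholds])
    fmr = np.array([(impostor >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(fnmr - fmr))
    return 100 * (fnmr[idx] + fmr[idx]) / 2

def fnmr_at_fmr(genuine, impostor, target_fmr=0.01):
    """FNMR (in %) at a fixed FMR, e.g. 0.01 for the 'FNMR @1% FMR' column."""
    threshold = np.quantile(impostor, 1.0 - target_fmr)
    return 100 * (genuine < threshold).mean()
```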

Task: desktop
Global Distributions

| EER [%] | FNMR @0.1% FMR [%] | FNMR @1% FMR [%] | FNMR @10% FMR [%] | AUC [%] | Accuracy [%] |
|---|---|---|---|---|---|
| 3.33 | 44.1673 | 11.958 | 0.5071 | 99.4761 | 96.676 |
Mean Per-Subject Distributions
| EER [%] | AUC [%] | Accuracy [%] | Rank-1 [%] |
|---|---|---|---|
| 0.7718 | 99.8713 | 96.4278 | 98.044 |
Fairness
| STD [%] | SER | FDR | IR | GARBE | SIR_a [%] | SIR_g [%] |
|---|---|---|---|---|---|---|
| 0.6418 | 1.0249 | 97.061 | 2.0791 | 0.1316 | 4.0316 | 3.045 |
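
The fairness row reports, for both tasks, the spread of performance across demographic groups. As a hedged illustration, the snippet below computes two of the simpler indicators, assuming STD is the standard deviation of per-group error rates and SER (Skewed Error Ratio) is the worst-group error divided by the best-group error; the exact definitions of all seven metrics (including FDR, IR, GARBE, SIR_a, SIR_g) are given in the KVC paper.

```python
import numpy as np

def fairness_summary(group_eers):
    """Sketch: STD and SER over per-group EERs (assumed definitions)."""
    eers = np.asarray(list(group_eers.values()), dtype=float)
    return eers.std(), eers.max() / eers.min()

# Illustrative (made-up) per-age-group EERs in %:
std, ser = fairness_summary({"10-13": 3.9, "14-17": 3.5, "18-26": 3.1,
                             "27-35": 3.3, "35-44": 3.6, "45-79": 4.0})
print(f"STD = {std:.4f}, SER = {ser:.4f}")
```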
Global Accuracy by Demographic Group [%]
| Age group | M | F | Mean |
|---|---|---|---|
| 10-13 | 96.6949 | 95.9447 | 96.3198 |
| 14-17 | 96.6405 | 96.2649 | 96.4527 |
| 18-26 | 96.7689 | 96.5207 | 96.6448 |
| 27-35 | 96.5879 | 96.2198 | 96.4038 |
| 35-44 | 96.0336 | 95.7015 | 95.8676 |
| 45-79 | 96.1689 | 94.4155 | 95.2922 |
| Mean | 96.44 | 95.8245 | |
Impostor Distances by Age Group
| Age group | 10-13 | 14-17 | 18-26 | 27-35 | 35-44 | 45-79 |
|---|---|---|---|---|---|---|
| 10-13 | 0.4005 | 0.3793 | 0.3768 | 0.371 | 0.3724 | 0.3777 |
| 14-17 | 0.3793 | 0.3899 | 0.3817 | 0.3778 | 0.3785 | 0.3763 |
| 18-26 | 0.3768 | 0.3817 | 0.3816 | 0.3787 | 0.3787 | 0.38 |
| 27-35 | 0.371 | 0.3778 | 0.3787 | 0.3867 | 0.3841 | 0.3849 |
| 35-44 | 0.3724 | 0.3785 | 0.3787 | 0.3841 | 0.3978 | 0.3962 |
| 45-79 | 0.3777 | 0.3763 | 0.38 | 0.3849 | 0.3962 | 0.413 |
Impostor Distances by Gender Group
| Gender | M | F |
|---|---|---|
| M | 0.3866 | 0.3785 |
| F | 0.3785 | 0.3935 |


Task: mobile
Global Distributions
| EER [%] | FNMR @0.1% FMR [%] | FNMR @1% FMR [%] | FNMR @10% FMR [%] | AUC [%] | Accuracy [%] |
|---|---|---|---|---|---|
| 3.61 | 63.616 | 17.4376 | 0.5992 | 99.278 | 96.3872 |
Mean Per-Subject Distributions
| EER [%] | AUC [%] | Accuracy [%] | Rank-1 [%] |
|---|---|---|---|
| 1.034 | 99.7594 | 96.24 | 96.11 |
Fairness
| STD [%] | SER | FDR | IR | GARBE | SIR_a [%] | SIR_g [%] |
|---|---|---|---|---|---|---|
| 0.6654 | 1.0254 | 94.327 | 4.0105 | 0.2137 | 5.111 | 4.8338 |
Global Accuracy by Demographic Group [%]
| Age group | M | F | Mean |
|---|---|---|---|
| 10-13 | 95.8898 | 96.0118 | 95.9508 |
| 14-17 | 96.0405 | 95.5469 | 95.7937 |
| 18-26 | 96.433 | 94.9858 | 95.7094 |
| 27-35 | 96.4899 | 96.0603 | 96.2751 |
| 35-44 | 96.9359 | 96.2244 | 96.5802 |
| 45-79 | 97.4024 | 95.3293 | 96.3658 |
| Mean | 96.6603 | 95.6293 | |
Impostor Distances by Age Group
| Age group | 10-13 | 14-17 | 18-26 | 27-35 | 35-44 | 45-79 |
|---|---|---|---|---|---|---|
| 10-13 | 0.2964 | 0.2942 | 0.29 | 0.2867 | 0.2809 | 0.2773 |
| 14-17 | 0.2942 | 0.3058 | 0.2967 | 0.2908 | 0.2849 | 0.2735 |
| 18-26 | 0.29 | 0.2967 | 0.3183 | 0.2994 | 0.2982 | 0.2923 |
| 27-35 | 0.2867 | 0.2908 | 0.2994 | 0.3015 | 0.2902 | 0.2902 |
| 35-44 | 0.2809 | 0.2849 | 0.2982 | 0.2902 | 0.3017 | 0.2997 |
| 45-79 | 0.2773 | 0.2735 | 0.2923 | 0.2902 | 0.2997 | 0.3031 |
Impostor Distances by Gender Group
| Gender | M | F |
|---|---|---|
| M | 0.2977 | 0.2936 |
| F | 0.2936 | 0.3179 |
