Official repository for the paper Fast Feature Field (F3): A Predictive Representation of Events.
Richeek Das, Kostas Daniilidis, Pratik Chaudhari
GRASP Laboratory, University of Pennsylvania
[Paper] • [Video] • [Website] • [BibTeX]
F3 is a predictive representation of events: a statistic of past events sufficient to predict future events. We prove that such a representation retains information about the structure and motion in the scene. The F3 architecture is designed specifically for high-performance event processing; it achieves low-latency computation by exploiting the sparsity of event data using a multi-resolution hash encoder and a permutation-invariant architecture. Our implementation computes F3 at 120 Hz and 440 Hz at HD and VGA resolutions, respectively, and runs downstream task predictions at 25-75 Hz at HD resolution. These HD inference rates are roughly 2-5 times faster than current state-of-the-art event-based methods. Please refer to the paper for more details.
See "Using F3 with torch.hub" below for a quick way to load F3 models for inference without cloning the repository. If you want to train F3 models or use the codebase for your own tasks, please install F3 locally by following the instructions below.
Create and activate a conda environment:

```bash
conda create -n f3 python=3.11
conda activate f3
```

Install F3 locally:

```bash
git clone git@github.com:grasp-lyrl/fast-feature-fields.git
cd fast-feature-fields
pip install -e .
```

Inference using pretrained F3 and its downstream variants [minimal.ipynb]
To get you up and running quickly, we can download a small sequence from M3ED and run some inference tasks on it with pretrained weights. Head over to [minimal.ipynb] to explore the inference pipeline for F3 and its downstream variants. This is the recommended way to get started. You can also load pretrained F3 models using torch.hub as shown below.
You can directly load pretrained F3 models using PyTorch Hub without cloning the repository:
```python
import torch

model = torch.hub.load('grasp-lyrl/fast-feature-fields', 'f3',
                       name='1280x720x20_patchff_ds1_small',
                       pretrained=True, return_feat=True, return_logits=False)
```

The `name` parameter can be replaced with any of the configuration names available under `confs/ff/modeloptions/` (without the `.yml` extension).
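If you are unsure which entrypoints or arguments are available, you can also query the hub directly with the standard torch.hub API:

```python
import torch

# List the entrypoints exposed by the repository's hubconf.py
# (this fetches the repo, so it needs network access).
print(torch.hub.list('grasp-lyrl/fast-feature-fields'))

# Print the docstring of the 'f3' entrypoint, which documents its arguments.
print(torch.hub.help('grasp-lyrl/fast-feature-fields', 'f3'))
```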
Please refer to data/README.md for detailed instructions on setting up the datasets; this is required if you want to train F3 models on the M3ED, DSEC, or MVSEC datasets. As an example, after setting up the car urban daytime driving sequences of M3ED as described there, you can train an F3 model on them with:
```bash
accelerate launch --config_file confs/accelerate_confs/2GPU.yml main.py \
    --conf confs/ff/trainoptions/patchff_fullcardaym3ed_small_20ms.yml \
    --compile
```

We provide training scripts and pretrained models for multiple downstream tasks. Each task has its own detailed README:
- Monocular Depth Estimation: See src/f3/tasks/depth/README.md
- Optical Flow Estimation: See src/f3/tasks/optical_flow/README.md
- Semantic Segmentation: See src/f3/tasks/segmentation/README.md
F3 can be easily integrated as a feature extractor for your custom tasks. The model outputs dense feature representations that can be fed to task-specific decoders. More instructions coming soon!
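As a rough illustration of what a task-specific decoder on top of F3 features can look like (this is not the repository's API; the feature layout, channel count, and class count below are placeholder assumptions), consider a small convolutional head:

```python
import torch
import torch.nn as nn

class ExampleSegHead(nn.Module):
    """Hypothetical decoder: maps a dense F3 feature map to per-pixel class logits.

    Assumes features arrive as a (B, feat_dim, H, W) tensor; adjust feat_dim and
    num_classes to match your F3 configuration and task.
    """
    def __init__(self, feat_dim: int = 32, num_classes: int = 11):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(feat_dim, 64, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)  # (B, num_classes, H, W)
```

Such a head can be trained with a standard task loss, with the F3 backbone kept frozen or fine-tuned.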
For high-performance deployment on edge devices (e.g., NVIDIA Jetson) or in C++ applications, F3 and its downstream models can be exported to PyTorch 2.x AOTI (Ahead-Of-Time Inductor) .pt2 format. This enables:
- Inference without Python dependencies
- Reduced latency and memory footprint
- Easy integration with ROS2 and C++ pipelines
See _aoti_pt2/README.md for detailed export and deployment instructions.
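For orientation, an export/load round trip with a recent PyTorch (2.6-style torch.export + AOTI packaging API) looks roughly like the sketch below; the export_*.py scripts in _aoti_pt2/ are the reference, and `model` / `example_events` here are placeholders:

```python
import torch

# Placeholders: `model` is an F3 (or downstream) module in eval mode and
# `example_events` is a representative input batch (see minimal.ipynb).
exported = torch.export.export(model, (example_events,))

# Compile kernels and weights into a single .pt2 package.
torch._inductor.aoti_compile_and_package(exported, package_path="f3.pt2")

# Later: load the package and run it without the original Python source.
runner = torch._inductor.aoti_load_package("f3.pt2")
features = runner(example_events)
```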
| Platform | Resolution | F3 | F3 + Depth | F3 + Flow |
|---|---|---|---|---|
| Desktop RTX 4090 | 1280x720 | 2.23 ms | 14.43 ms | 4.71 ms |
| Desktop RTX 4090 | 320x240 | 0.62 ms | 2.87 ms | 2.18 ms |
| Jetson Orin (JP 6.2) | 320x240 | 4.6 ms | TBD | TBD |

Benchmarks were run with 200K events per batch at fp16/bf16 precision.
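For context, latencies like those above are typically measured by timing repeated forward passes with CUDA events; test_speed.py is the authoritative benchmark script, and the helper below is only a generic sketch (`model` and `batch` are placeholders):

```python
import torch

def time_cuda(fn, warmup: int = 10, iters: int = 100) -> float:
    """Average latency of fn() in milliseconds, measured with CUDA events."""
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        for _ in range(warmup):      # let kernels and caches warm up first
            fn()
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            fn()
        end.record()
        torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

# e.g. print(f"{time_cuda(lambda: model(batch)):.2f} ms"), with inputs prepared as in minimal.ipynb
```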
This section contains additional tools and scripts for dataset analysis, ground truth generation, and reproducibility of experiments.
Verify the temporal misalignment between events and semantic labels in the DSEC dataset, as discussed in the F3 paper:
```bash
python scripts/dsec_semantic_misalignment_test.py
```

This script:
- Loads event data and semantic segmentation labels from DSEC
- Visualizes the temporal alignment between modalities
Generate optical flow ground truth from LiDAR point clouds for any camera in M3ED:
```bash
python src/f3/tasks/optical_flow/generate_gt.py \
    --events_h5 /path/to/m3ed_events.h5 \
    --depth_h5 /path/to/m3ed_depths.h5 \
    --base_name name_for_output_file.h5
```

This script:
- Loads LiDAR point clouds and camera poses from M3ED
- Computes egomotion from consecutive poses and depth
- Saves flow maps as HDF5 files with timestamps
- Optionally generates color-coded flow visualizations
See src/f3/tasks/optical_flow/README.md for detailed usage.
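For intuition, the core geometric step is: back-project each pixel with its depth, transform the 3D point by the relative camera pose between the two timestamps, re-project, and take the pixel displacement as flow. The sketch below illustrates this under assumed conventions and is not the script's actual implementation; invalid (zero) depth pixels would need to be masked.

```python
import numpy as np

def egomotion_flow(depth: np.ndarray, K: np.ndarray, T_rel: np.ndarray) -> np.ndarray:
    """Sketch: optical flow induced by camera ego-motion.

    depth: (H, W) depth at time t, K: (3, 3) intrinsics,
    T_rel: (4, 4) transform taking points from the camera frame at t
    to the camera frame at t+dt (assumed convention).
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)

    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                 # back-project to 3D
    pts = T_rel[:3, :3] @ pts + T_rel[:3, 3:4]                          # move to the frame at t+dt
    proj = K @ pts                                                      # re-project
    uv_next = proj[:2] / proj[2:3]

    return (uv_next - pix[:2]).T.reshape(H, W, 2)                       # (H, W, 2) flow in pixels
```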
Generate rectified monocular depth maps from RGB/Grayscale images using DepthAnything V2 for any camera in M3ED:
```bash
python src/f3/tasks/depth/generate_depth.py \
    --h5fn /path/to/input_h5 \
    --out_h5fn /path/to/output_h5 \
    --target prophesee \
    --side left \
    --checkpoint /path/to/depthanythingv2_checkpoint.pth
```

Here `--target` selects the camera to warp the depth maps to, and `--side` selects which side of that camera (e.g. left). This script:
- Loads RGB images from the source camera
- Generates depth predictions using DepthAnything V2
- Warps the depth maps to the target camera frame using camera calibration
- Saves the rectified depth maps as HDF5 files
See src/f3/tasks/depth/README.md for detailed usage.
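For intuition, the warping step amounts to back-projecting the source depth with its intrinsics, transforming the points with the source-to-target extrinsics, and re-projecting them into the target camera. The sketch below illustrates this under assumed conventions (metric depth, a target-from-source rigid transform) and is not the repository's implementation.

```python
import numpy as np

def warp_depth(depth_src, K_src, K_tgt, T_tgt_src, target_hw):
    """Sketch: splat a source-camera depth map into a target camera frame.

    depth_src: (H, W) depth in the source camera (invalid pixels set to 0),
    K_src/K_tgt: (3, 3) intrinsics, T_tgt_src: (4, 4) target-from-source
    extrinsics, target_hw: (H_t, W_t) target resolution.
    """
    H, W = depth_src.shape
    Ht, Wt = target_hw
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)

    pts = np.linalg.inv(K_src) @ pix * depth_src.reshape(1, -1)         # back-project
    pts = T_tgt_src[:3, :3] @ pts + T_tgt_src[:3, 3:4]                  # into the target frame
    proj = K_tgt @ pts

    front = (proj[2] > 1e-6) & (depth_src.reshape(-1) > 0)              # valid and in front of the camera
    ut = np.round(proj[0, front] / proj[2, front]).astype(int)
    vt = np.round(proj[1, front] / proj[2, front]).astype(int)
    z = proj[2, front]

    out = np.full((Ht, Wt), np.inf)
    ok = (ut >= 0) & (ut < Wt) & (vt >= 0) & (vt < Ht)
    np.minimum.at(out, (vt[ok], ut[ok]), z[ok])                         # keep the nearest surface
    out[np.isinf(out)] = 0.0                                            # unobserved target pixels
    return out
```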
- scripts/generate_rectified_images.py: Generate undistorted images from raw M3ED camera streams
- scripts/viz_gt_depth.py: Visualize depth ground truth
- scripts/viz_gt_flow.py: Visualize optical flow ground truth
- scripts/m3ed_viz.py: Visualize M3ED data
```
fast-feature-fields/
├── main.py                  # Train F³ models on event datasets
├── everything.py            # Run inference on all tasks simultaneously
├── test_speed.py            # Benchmark inference speed for F³ and downstream tasks
├── minimal.ipynb            # Quick start notebook for inference
├── hubconf.py               # PyTorch Hub integration for loading pretrained models
│
├── confs/                   # Configuration files
│   ├── ff/                  # F³ model, training, and data configurations
│   ├── monocular_depth/     # Depth estimation training configs
│   ├── optical_flow/        # Optical flow training configs
│   ├── segmentation/        # Semantic segmentation training configs
│   ├── everything/          # Multi-task joint inference configs
│   └── accelerate_confs/    # Multi-GPU training configurations
│
├── src/f3/                  # Core F³ implementation
│   ├── event_FF.py          # F³ model architecture
│   ├── utils/               # Training utilities, data loading, visualization
│   └── tasks/               # Downstream task implementations
│       ├── depth/           # Monocular depth estimation (see task README)
│       ├── optical_flow/    # Optical flow estimation (see task README)
│       ├── segmentation/    # Semantic segmentation (see task README)
│       ├── matching/        # Feature matching utilities
│       └── robustness/      # Robustness evaluation tools
│
├── scripts/                 # Utility scripts
│   ├── download/            # Dataset download scripts
│   └── setup/               # Dataset preprocessing and setup
│
├── data/                    # Dataset symlinks and setup instructions
│   └── README.md            # Detailed dataset setup guide
│
├── _aoti_pt2/               # PyTorch 2.x AOTI export for deployment
│   ├── export_*.py          # Export models to .pt2 format
│   ├── _py_src/             # Python pt2 inference implementation
│   └── _cpp_src/            # C++ pt2 inference implementation
│
└── outputs/                 # Training outputs (auto-generated)
    └── {experiment_name}/   # Per-experiment directories
        ├── models/          # Model checkpoints
        └── logs/            # Training logs
```
Each task directory (src/f3/tasks/{task}/) contains its own README with detailed training and evaluation instructions.
If you find this code useful in your research, please consider citing:
```bibtex
@misc{das2025fastfeaturefield,
      title={Fast Feature Field ($\text{F}^3$): A Predictive Representation of Events},
      author={Richeek Das and Kostas Daniilidis and Pratik Chaudhari},
      year={2025},
      eprint={2509.25146},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.25146},
}
```

If you encounter any issues, please open an issue on the GitHub Issues page or contact sudoRicheek.

