PRESTO is a novel deep transfer learning framework designed to forecast satellite ephemeris and clock errors for the NavIC (Navigation with Indian Constellation) system.
Built to solve the challenge of extreme data scarcity (training on only 7 days of history), PRESTO generates stable 24-hour forecasts whose error residuals conform to a near-perfect normal distribution, the gold standard for GNSS error modeling.
Note: This project was developed as a solution to SIH 2025 PS ID SIH25176.
In satellite navigation, minimizing the Mean Squared Error (MSE) isn't enough. The ultimate goal is to strip away all systematic, predictable error components (trends, seasonality, orbital perturbations) until only pure, random noise remains.
Our Success Metric: A prediction distribution that is statistically indistinguishable from a Gaussian Normal Distribution.
| The Goal | The Result (PRESTO-GEO) |
|---|---|
| Eliminate systematic bias | |
| Normalize residuals | Shapiro-Wilk |
The distribution of PRESTO's out-of-sample predictions (blue) aligns perfectly with the theoretical normal curve (black dashed).
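A normality check of this kind can be reproduced with SciPy's Shapiro-Wilk test. The residuals below are simulated standard-normal noise standing in for PRESTO's actual prediction errors; this is an illustrative sketch, not the project's evaluation code.

```python
import numpy as np
from scipy.stats import shapiro

# Hypothetical stand-in for PRESTO's out-of-sample residuals:
# one day of 15-minute samples (96 points) of pure Gaussian noise.
rng = np.random.default_rng(42)
residuals = rng.normal(loc=0.0, scale=1.0, size=96)

stat, p_value = shapiro(residuals)
print(f"W = {stat:.4f}, p = {p_value:.4f}")

# A large p-value means we cannot reject normality at the 5% level,
# i.e. the residuals look statistically indistinguishable from noise.
if p_value > 0.05:
    print("Residuals consistent with a Gaussian distribution.")
```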
PRESTO is not a monolithic model. It is a hybrid "divide and conquer" system designed to handle specific aspects of the satellite signal:
- Spatio-Temporal Extraction (GNN): We model the 4 error channels (`x`, `y`, `z`, `clock`) as nodes in a graph. A Graph Attention Network (GAT) learns the latent physical dependencies (e.g., how orbital drag affects clock drift).
- Semiparametric Decomposition: Satellite errors are a composite of slow drifts and fast noise. We use a quadratic model to isolate and subtract the long-term trend.
- Residual Forecasting (Autoformer): The remaining high-frequency signal is complex and chaotic. We use an Autoformer with Auto-Correlation mechanisms to discover periodicity and forecast these residuals.
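The semiparametric split above can be sketched in NumPy: fit a degree-2 polynomial to a channel, subtract it, and hand the high-frequency residual to the forecaster. The drift-plus-noise series here is synthetic and purely illustrative.

```python
import numpy as np

# Hypothetical clock-error channel: slow quadratic drift + fast noise.
rng = np.random.default_rng(0)
t = np.arange(672)  # 7 days of 15-minute samples
signal = 1e-4 * t**2 - 0.02 * t + rng.normal(scale=0.5, size=t.size)

# Fit the quadratic trend model (the "slow drift" component).
coeffs = np.polyfit(t, signal, deg=2)
trend = np.polyval(coeffs, t)

# The residual is what an Autoformer-style forecaster would then model.
residual = signal - trend
print("residual mean:", residual.mean())
print("residual std :", residual.std())
```

Because the least-squares fit includes a constant term, the residual is mean-free by construction, which is exactly the property the downstream forecaster relies on.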
Training a Transformer on 145 data points is impossible. We solved this with a novel transfer learning pipeline:
- Synthetic Data Generation: We fine-tuned a Large Time-series Model (TimeR-XL) on the cleaned ISRO data to generate a synthetic dataset 100x larger than the original, capturing the unique statistical "dialect" of the satellites.
- Pre-Training: PRESTO is trained from scratch on this massive synthetic dataset, learning generalized error dynamics without overfitting.
- Fine-Tuning: The model is finally refined on the real 7-day dataset using K-Fold Cross-Validation.
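The fine-tuning split can be illustrated with scikit-learn's `KFold`. The 145 samples match the figure quoted above, but the fold count of 5 is an assumption for this sketch; the repo does not state it.

```python
import numpy as np
from sklearn.model_selection import KFold

# 145 real samples from the 7-day history (as noted above).
X = np.arange(145).reshape(-1, 1)

kf = KFold(n_splits=5, shuffle=False)  # fold count assumed, not from the repo
fold_sizes = []
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    fold_sizes.append(len(val_idx))
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")
```

With 145 samples and 5 folds, each validation fold holds exactly 29 points and every sample is validated on exactly once.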
PRESTO/
├── Datasets/
│ ├── Cleaned_Datasets/ # Preprocessed 15-min interval data
│ ├── Synthetic_Datasets/ # 100x augmented data generated by TimeR-XL
│ └── Forecasted_Datasets/ # Final 8th-day predictions
├── Notebooks/
│ ├── Data_Preprocessing.ipynb # Resampling, interpolation, outlier fix
│ ├── Synthetic_Data_Generation.ipynb # Fine-tuning TimeR-XL
│ ├── PRESTO_Pre_Train.ipynb # Training on synthetic data
│ └── PRESTO_Fine_Tune.ipynb # Transfer learning to real ISRO data
├── Plots/ # Visualizations of results and analysis
├── PRESTO_Weights/ # Saved model states (.pth)
└── PRESTO_Technical_Deep_Dive.pdf # Full technical report
Raw satellite data is noisy and leptokurtic (heavy-tailed). Our preprocessing pipeline regularizes this without destroying signal integrity.
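A pandas sketch of the resampling-and-interpolation step from `Data_Preprocessing.ipynb`: snap irregular raw timestamps onto a strict 15-minute grid, then fill the resulting gaps by time-aware linear interpolation. The column name, timestamps, and values here are made up for illustration.

```python
import pandas as pd

# Irregular raw timestamps with a gap (hypothetical input).
idx = pd.to_datetime([
    "2025-01-01 00:00", "2025-01-01 00:14", "2025-01-01 00:31",
    "2025-01-01 01:02", "2025-01-01 01:15",
])
raw = pd.Series([0.0, 0.1, 0.3, 0.9, 1.0], index=idx, name="clock_err")

# Snap to a strict 15-minute grid; empty bins become NaN,
# which time-based interpolation then fills.
clean = raw.resample("15min").mean().interpolate(method="time")
print(clean)
```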
PRESTO generates physically plausible, stable forecasts for the unseen 8th day with well-calibrated confidence intervals.

Note on Dependencies:
This project does not use a strictly pinned requirements.txt. Instead, all necessary libraries are installed directly within the first cells of the Jupyter Notebooks using !pip install.
Core Libraries Used:
- `torch` (PyTorch)
- `torch_geometric` (GNN layers)
- `OpenLTM` (for TimeR-XL)
- `scikit-learn`, `pandas`, `numpy`, `scipy`
How to Run:
- Clone the repository.
- Navigate to the `Notebooks/` folder.
- Run the notebooks in sequential order (Data_Preprocessing -> Synthetic_Data_Generation -> PRESTO_Pre_Train -> PRESTO_Fine_Tune).
This project was a collaborative research effort.
*These authors contributed equally to this work.
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.
See the LICENSE file for details.