- Introduction
- Key Features
- Installation
- Quickstart
- Usage
- Troubleshooting
- Architecture
- Citation
- License
- Contributors
- Acknowledgements
DeepRM is a deep learning-based framework for RNA modification detection using Nanopore direct RNA sequencing. This repository contains the source code for training and running DeepRM.
- High accuracy: Achieves state-of-the-art accuracy in RNA modification detection and stoichiometry measurement.
- Single-molecule resolution: Provides single-molecule level predictions for RNA modifications.
- End-to-end pipeline: Easy-to-use pipeline from raw reads to site-level predictions.
- Customizable: Supports training of custom models.
- Linux x86_64
- Python 3.9+
- PyTorch 2.3+
  - See https://pytorch.org/get-started/locally/ for installation instructions.
  - Please ensure that you have installed the correct version of PyTorch with CUDA support if you want to use a GPU for inference or training.
- Torchmetrics 0.9.0+ (only for training)

  ```
  python -m pip install torchmetrics
  ```

- Dorado 0.7.3+ (optional, for basecalling)
- SAMtools 1.16.1+ (optional, for BAM file processing)
- Python package requirements are listed in `requirements.txt` and will be installed automatically when you install DeepRM.
- Estimated time: ~10 minutes
- Install via pip (recommended)

  ```
  python -m pip install deeprm
  ```

- Install from source (GitHub)

  ```
  git clone https://github.com/vadanamu/deeprm
  cd deeprm
  python -m pip install -U pip
  python -m pip install -e .
  ```

- If installation fails on an old OS (e.g., CentOS 7) due to NumPy, you can try installing an older version of NumPy first:

  ```
  python -m pip install "numpy<2.3.0,>2.0.0"
  python -m pip install -e .
  ```

- To verify the installation, run:

  ```
  deeprm --version
  deeprm check
  ```

- If everything is installed correctly, you should see the version of DeepRM and a message indicating that the installation was successful.
- If you encounter CUDA or torch-related errors, make sure you have installed the correct version of PyTorch with CUDA support.
- DeepRM can use a C++-based preprocessing tool for acceleration, which is provided both as a precompiled binary and as source code.
- Depending on your system configuration, you may need to build the C++ preprocessing tool from source, located in the `cpp` directory of the DeepRM repository.
- Please refer to the cpp/README.md page for detailed build instructions.
- For demonstration purposes, you can use the example POD5 and BAM files provided in the `examples` directory of the repository.
- You can also use your own POD5 and BAM files.
- Estimated time: ~1 hour
1️⃣ Prepare data
```
deeprm call prep -p inference_example.pod5 -b inference_example.bam -o <prep_dir>
```

- (Alternative) To supply your own POD5 file:

  ```
  dorado basecaller --reference <ref_fasta> --min-qscore 0 --emit-moves rna004_130bps_sup@v5.0.0 <pod5_dir> \
    | tee >(samtools sort -@ <threads> -O BAM -o <bam_path> - && samtools index -@ <threads> <bam_path>) \
    | deeprm call prep -p <pod5_dir> -b - -o <prep_dir>
  ```

  - If Dorado fails due to "illegal memory access", try adding the `--chunksize <chunk_size>` option (e.g., chunk_size=12000).
2️⃣ Run inference
```
deeprm call run -b inference_example.bam -i <prep_dir> -o <pred_dir> -s 1000
```

- Adjust the `-s` (batch size) parameter according to your GPU memory capacity (default: 10000).
- Expected output files:
  - Site-level detection result file (.bed)
  - Molecule-level detection result file (.npz)
- Estimated time: ~1 hour
1️⃣ Prepare unmodified & modified training data

```
deeprm train prep -p training_a_example.pod5 -b training_a_example.bam -o <prep_dir>/a
deeprm train prep -p training_m6a_example.pod5 -b training_m6a_example.bam -o <prep_dir>/m6a
```

2️⃣ Compile training data

```
deeprm train compile -n <prep_dir>/a/data -p <prep_dir>/m6a/data -o <prep_dir>/compiled
```

3️⃣ Run training

```
deeprm train run -d <prep_dir>/compiled -o <output_dir> --batch 64
```

- Adjust the `--batch` parameter according to your GPU memory capacity (default: 1024).
- Expected output file:
  - Trained DeepRM model file (.pt)
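- Once training finishes, the checkpoint can be inspected directly with PyTorch. This is a minimal sketch, not part of the DeepRM CLI; the file name and the assumption that the `.pt` file loads as a standard PyTorch object are placeholders/assumptions:

  ```
  import torch

  # Minimal sketch: peek inside the trained checkpoint produced by `deeprm train run`.
  # The path below is a placeholder, and the internal layout (state dict vs. full
  # checkpoint object) is an assumption; adjust to what your output directory contains.
  ckpt_path = "<output_dir>/deeprm_model.pt"
  ckpt = torch.load(ckpt_path, map_location="cpu")
  if isinstance(ckpt, dict):
      print(list(ckpt.keys()))  # tensor names or checkpoint fields
  else:
      print(type(ckpt))
  ```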
- This method uses the precompiled C++ binary to accelerate the preprocessing step.
```
dorado basecaller --reference <ref_fasta> --min-qscore 0 --emit-moves rna004_130bps_sup@v5.0.0 <pod5_dir> \
  | tee >(samtools sort -@ <threads> -O BAM -o <bam_path> - && samtools index -@ <threads> <bam_path>) \
  | deeprm call prep -p <pod5_dir> -b - -o <prep_dir>
```

- If Dorado fails due to "illegal memory access", try adding the `--chunksize <chunk_size>` option (e.g., chunk_size=12000).
- If the precompiled binary does not work on your system, please refer to the cpp/README.md page for detailed build instructions.
- Adjust the `-g` (`--filter-flag`) parameter according to your needs. If using a genomic reference, you may want to use `-g 260`.
- This method is slower than the accelerated preparation method, but it is supported for cases such as:
  - The POD5 files are already basecalled to BAM files with move tags.
  - You want to run basecalling and preprocessing on separate machines.
- Basecall the POD5 files to BAM files with move tags (skip if already done):
  ```
  dorado basecaller --reference <reference_path> --min-qscore 0 --emit-moves rna004_130bps_sup@v5.0.0 <pod5_dir> > <raw_bam_path>
  ```

  - If Dorado fails due to "illegal memory access", try adding the `--chunksize <chunk_size>` option (e.g., chunk_size=12000).

- Filter, sort, and index the BAM files:
  - Adjust the `-F` parameter according to your needs. If using a genomic reference, you may want to use `-F 260`.

  ```
  samtools view -@ <threads> -bh -F 276 -o <bam_path> <raw_bam_path>
  samtools sort -@ <threads> -o <bam_path> <bam_path>
  samtools index -@ <threads> <bam_path>
  ```

- To preprocess the inference data (transcriptome), run the following command:

  ```
  deeprm call prep -p <input_POD5_dir> -b <bam_path> -o <prep_dir>
  ```

- This will create the NPZ files for inference.
- The trained DeepRM model file is included in the repository: `weight/deeprm_weights.pt`.
- For inference, run the following command:

  ```
  deeprm call run --model <model_file> --data <data_dir> --output <prediction_dir> --gpu-pool <gpu_pool>
  ```

  - Adjust the `-s` (batch size) parameter according to your GPU memory capacity (default: 10000).

- This will create a directory with the site-level and molecule-level result files.
- Optionally, if you used a transcriptomic reference for alignment, you can convert the result to genomic coordinates by supplying a RefFlat/GenePred/RefGene file (`--annot <annotation_file>`).
- The output BED file follows the standard bedMethyl format. Please see https://genome.ucsc.edu/goldenpath/help/bedMethyl.html for a description.
- Please note that columns 14 to 18 are zero-filled for compatibility. These columns will be used for a planned future update.
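- For downstream analysis, the site-level BED file can be parsed with standard Python. This is a minimal sketch under the assumption that the output follows the standard bedMethyl column layout described at the link above; the file name is a placeholder:

  ```
  import csv

  # Minimal sketch: read the site-level bedMethyl output.
  # The file name is a placeholder; columns 10 and 11 are coverage and percent
  # modified in the standard bedMethyl layout (an assumption about this output).
  with open("<pred_dir>/sites.bed") as fh:
      for row in csv.reader(fh, delimiter="\t"):
          chrom, start, end, strand = row[0], int(row[1]), int(row[2]), row[5]
          coverage = int(row[9])          # column 10: valid coverage
          pct_modified = float(row[10])   # column 11: percentage of modified reads
          print(chrom, start, end, strand, coverage, pct_modified)
  ```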
- The output BAM file contains modification information in MM and ML tags. Please see https://samtools.github.io/hts-specs/SAMtags.pdf for a description.
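- The per-read calls stored in the MM/ML tags can be read with pysam, if you prefer working from the BAM file. A minimal sketch, assuming pysam is installed; the file name is a placeholder, and `modified_bases` is pysam's generic accessor for MM/ML tags, not a DeepRM-specific API:

  ```
  import pysam

  # Minimal sketch: iterate over per-read modification calls stored in MM/ML tags.
  # The file name is a placeholder.
  with pysam.AlignmentFile("<pred_dir>/output.bam", "rb") as bam:
      for read in bam:
          # modified_bases maps (canonical base, strand, modification code) to a
          # list of (query position, probability-encoded quality) tuples.
          mods = read.modified_bases
          if mods:
              print(read.query_name, mods)
  ```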
- The output NPZ file contains the following arrays (a loading sketch follows the list):
1. read_id
2. label_id
3. pred: modification score (between 0 and 1)
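- A minimal sketch for loading these arrays with NumPy; the file name is a placeholder, while the array names come from the list above:

  ```
  import numpy as np

  # Minimal sketch: load the molecule-level predictions (file name is a placeholder).
  data = np.load("<pred_dir>/predictions.npz")
  read_id = data["read_id"]    # per-molecule read identifiers (see Read ID specification)
  label_id = data["label_id"]  # encoded reference/position/strand (see Label ID specification)
  pred = data["pred"]          # modification scores between 0 and 1
  print(read_id.shape, label_id.shape, pred.shape)
  ```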
- Read ID specification:
- The UUID4 format read ID (128 bits) is converted to two 64-bit integers for NumPy compatibility.
- You can convert the two 64-bit integers back to UUID4 using the following Python code:
```
import numpy as np
import uuid

def int_to_uuid(high, low):
    return uuid.UUID(bytes=b"".join([high.tobytes(), low.tobytes()]))
```
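- For example, applied to the `read_id` array loaded above (assuming it stores the high and low 64-bit halves per read, which is an assumption about the array layout):

  ```
  # Hypothetical usage: convert the first read's two 64-bit halves back to a UUID.
  # The per-row (high, low) layout of `read_id` is an assumption; adjust the indexing
  # to match your data.
  high, low = read_id[0]
  print(int_to_uuid(high, low))
  ```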
- Label ID specification:
- Label ID contains the reference, position, and strand information.
- You can decode the label ID using the following Python code:
```
import numpy as np

def decode_label_id(label_id, label_div=10**9):
    strand = np.sign(label_id)
    label_id_abs = np.abs(label_id) - 1
    ref_id = label_id_abs // label_div
    pos = label_id_abs % label_div
    return ref_id, pos, strand
```
- Reference ID is extracted from the input BAM file header.
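- To map decoded reference IDs back to reference names, the header of the BAM file supplied to DeepRM can be queried, for example with pysam. A minimal sketch, assuming pysam is installed, that reference IDs follow the header order, and with placeholder file names:

  ```
  import pysam

  # Minimal sketch: decode label IDs and map reference IDs back to reference names
  # using the header of the BAM file that was supplied to DeepRM (path is a placeholder).
  ref_id, pos, strand = decode_label_id(label_id)   # arrays loaded from the NPZ file
  with pysam.AlignmentFile("<bam_path>", "rb") as bam:
      ref_names = list(bam.references)               # assumed to match DeepRM's reference IDs
  i = 0
  print(ref_names[int(ref_id[i])], int(pos[i]), "+" if strand[i] > 0 else "-")
  ```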
- You can skip this step if your POD5 files are already basecalled to BAM files with move tags.

  ```
  dorado basecaller --min-qscore 0 --emit-moves rna004_130bps_sup@v5.0.0 <pod5_dir> > <bam_path>
  samtools index -@ <threads> <bam_path>
  ```

- To preprocess the training data (synthetic oligonucleotide), run the following command:

  ```
  deeprm train prep --input <input_POD5_dir> --output <output_file>
  ```

- This will create:
  - Training dataset: /block
- To compile the training dataset, run the following command:

  ```
  deeprm train compile --input <input_POD5_dir> --output <output_file>
  ```

- This will create:
  - Training dataset: /block
- To train the model, run the following command:

  ```
  deeprm train run --model deeprm_model --data <data_dir> --output <output_dir> --gpu-pool <gpu_pool>
  ```

  - Adjust the `--batch` parameter according to your GPU memory capacity (default: 1024).

- This will create a directory with the trained model file.
- If installation fails on an old OS (e.g., CentOS 7) due to a NumPy-related error, you can try installing an older version of NumPy first:

  ```
  python -m pip install "numpy<2.3.0,>2.0.0"
  python -m pip install -e .
  ```

- If you encounter CUDA or torch-related errors, make sure you have installed a PyTorch build with the correct CUDA version.
- If Dorado fails due to "illegal memory access", try adding the `--chunksize <chunk_size>` option (e.g., chunk_size=12000).
- If DeepRM call fails due to a memory error, try reducing the batch size (`-s` option, default: 10000).
- If DeepRM train fails due to a memory error, try reducing the batch size (`--batch` option, default: 1024).
- If DeepRM call preprocess fails due to a `libssl.so.1.1` not found error on newer versions of Ubuntu, try installing the `libssl1.1` package:
  - The libssl file can be found at: https://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl

  ```
  wget <libssl_file>
  sudo dpkg -i <libssl_file>
  ```

- If DeepRM call preprocess fails due to a memory error, try reducing the number of threads (`-t` option), the preprocessing batch size (`-n` option), or the output chunk size (`-k` option).
- If DeepRM train does not output training-related metrics, try installing the `torchmetrics` package:

  ```
  python -m pip install torchmetrics
  ```
If you use DeepRM in your research, please cite the following paper:
```
@article{kang2025deeprm,
  title={Comprehensive single-molecule resolution discovery of m6A RNA modification sites in the human transcriptome},
  author={Gihyeon Kang and Hyeonseo Hwang and Hyeonseong Jeon and Heejin Choi and Hee Ryung Chang and Nagyeong Yeo and Junehee Park and Narae Son and Eunkyeong Jeon and Jungmin Lim and Jaeung Yun and Wook Choi and Jae-Yoon Jo and Jong-Seo Kim and Sangho Park and Yoon Ki Kim and Daehyun Baek},
  journal={Nature Communications},
  year={2025},
  volume={In press},
  publisher={Springer Nature},
  doi={10.1038/s41467-025-67417-w}
}
```
The article is fully open access and available at https://doi.org/10.1038/s41467-025-67417-w
DeepRM is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
by Seoul National University R&DB Foundation and Genome4me Inc.
See the LICENSE file for details.
This repository is developed and maintained by the following organizations:
- Laboratory of Computational Biology, School of Biological Sciences, Seoul National University
- Principal Investigator: Prof. Daehyun Baek
- Genome4me, Inc., Seoul, Republic of Korea
This study was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT, Republic of Korea (MSIT) (RS-2019-NR037866, RS-2020-NR049252, RS-2020-NR049538, and RS-2022-NR067483), by a grant of Korean ARPA-H Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (RS-2025-25422732), by Artificial Intelligence Industrial Convergence Cluster Development Project funded by MSIT and Gwangju Metropolitan City, by National IT Industry Promotion Agency (NIPA) funded by MSIT, and by Korea Research Environment Open Network (KREONET) managed and operated by Korea Institute of Science and Technology Information (KISTI).



