I build machine learning systems and products grounded in research thinking
and designed around real-world constraints, deployment realities, and failure modes.
My research background is in Graph Neural Networks and financial fraud detection,
and I now focus on building end-to-end ML products — from problem framing
and system design to modeling, evaluation, and iteration.
I care less about models in isolation and more about systems that actually work.
Research remains my foundation — product impact is the goal.
- Applied Machine Learning & ML Engineering
- Graph-based Learning (Fraud, Anomalies, Structured Data)
- Predictive & Industrial ML Systems
- Evaluation-Driven AI & Failure Analysis
| Project | Focus | Notes |
|---|---|---|
| TRDGNN | 🔴 Flagship: Temporal GNNs | Bitcoin fraud detection with multiple architectural contributions and publication-ready analysis |
| Research-Paper-Analyzer | 🧠 LLM Product | PDF → structured JSON with grounding, numeric consistency, and latency constraints |
| GraphTabular-FraudFusion | 📉 Failure Analysis | Rigorous negative-result study showing when graph embeddings do not improve XGBoost |
These projects represent my research backbone: the same rigor I now apply to product-focused ML systems. The fusion idea behind GraphTabular-FraudFusion is sketched below.
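A minimal, hypothetical sketch of that fusion setup, assuming per-node graph embeddings and tabular features are already available: concatenate the two feature sets and compare an XGBoost model on the fused features against a tabular-only baseline. The arrays, shapes, and hyperparameters below are placeholders, not the project's actual data or configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Placeholder data: in the real study these would come from the fraud dataset
# and a trained graph model; random arrays are used here only to show the
# shape of the experiment.
n_nodes, n_tabular, n_embed = 5000, 32, 64
X_tab = rng.normal(size=(n_nodes, n_tabular))   # tabular features per node
Z_graph = rng.normal(size=(n_nodes, n_embed))   # graph embeddings per node
y = rng.integers(0, 2, size=n_nodes)            # fraud labels

X_fused = np.hstack([X_tab, Z_graph])           # simple concatenation fusion

def average_precision(X, y):
    """Train XGBoost on one feature set and report held-out average precision."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )
    model = XGBClassifier(n_estimators=200, max_depth=6)
    model.fit(X_tr, y_tr)
    return average_precision_score(y_te, model.predict_proba(X_te)[:, 1])

print("tabular only   :", average_precision(X_tab, y))
print("tabular + graph:", average_precision(X_fused, y))
```

The interesting question the study asks is when the second number fails to beat the first, and why.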
- PyTorch Geometric (GNNs)
- Feature engineering & evaluation pipelines
- Ablation studies and failure analysis
I use these in product-oriented ML systems for serving, experimentation, and iteration (a minimal example follows).
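For a concrete feel of the GNN side, here is a minimal, self-contained PyTorch Geometric sketch: a two-layer GCN trained for node classification on a tiny toy graph. The model name, graph, and hyperparameters are illustrative, not taken from any of the projects above.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

class NodeClassifierGCN(torch.nn.Module):
    """Two-layer GCN: the basic building block behind graph-based fraud models."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

# Toy graph: 4 nodes with 8-dim features and a handful of directed edges.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]], dtype=torch.long)
y = torch.tensor([0, 1, 0, 1])
data = Data(x=x, edge_index=edge_index, y=y)

model = NodeClassifierGCN(in_dim=8, hidden_dim=16, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

model.train()
for _ in range(50):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out, data.y)
    loss.backward()
    optimizer.step()
```

In the actual projects, the toy graph would be replaced by large transaction graphs with temporal structure, wrapped in the evaluation and ablation pipelines listed above.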
Research teaches why.
Engineering decides what survives reality.
Products demand both.
I optimize for clarity, correctness, and long-term usefulness — not hype.
📫 Connect
- GitHub: https://github.com/BhaveshBytess
- LinkedIn: https://www.linkedin.com/in/bhavesh-ai

