AI Engineer | PhD in Computer Engineering | Explainable AI Researcher
I build interpretable AI systems for video understanding and develop tools that make AI decisions transparent and trustworthy.
I work at the intersection of Computer Vision, Explainable AI, and Agentic Systems. My research focuses on:
- Interpretable deep learning for video analysis
- Transformer architectures with attention attribution
- Spatio-temporal understanding in video models
- Adversarial robustness and AI trustworthiness
- AI systems that communicate their reasoning
A novel XAI method for interpreting video Transformer models that provides both spatial and temporal explanations in a single forward pass with under 3% computational overhead (a simplified attribution sketch is shown below).
- Published in IEEE Access (2025)
- Paper
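To give a flavor of spatio-temporal attribution for video Transformers, here is a minimal sketch in the spirit of attention rollout, assuming a joint space-time attention model with a single CLS token. The function name, tensor shapes, and toy dimensions are assumptions for illustration; this is a generic rollout example, not the method published in the IEEE Access paper.

```python
# Illustrative only: generic attention-rollout-style attribution for a
# joint space-time Transformer (single CLS token + frame patch tokens).
import torch

def rollout_attribution(attn_maps, num_frames, tokens_per_frame):
    """attn_maps: list of per-layer attention tensors, each (heads, tokens, tokens),
    where tokens = 1 CLS + num_frames * tokens_per_frame.
    Returns per-frame spatial maps and a per-frame temporal importance vector."""
    tokens = attn_maps[0].shape[-1]
    rollout = torch.eye(tokens)
    for attn in attn_maps:
        a = attn.mean(dim=0)                     # average over heads
        a = a + torch.eye(tokens)                # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)      # re-normalize rows
        rollout = a @ rollout                    # accumulate attention across layers
    cls_to_patches = rollout[0, 1:]              # CLS attention to all patch tokens
    per_frame = cls_to_patches.reshape(num_frames, tokens_per_frame)
    spatial_maps = per_frame / per_frame.sum(dim=1, keepdim=True)  # spatial explanation
    temporal_scores = per_frame.sum(dim=1)                         # temporal explanation
    temporal_scores = temporal_scores / temporal_scores.sum()
    return spatial_maps, temporal_scores

# Toy example with random attention maps standing in for a real forward pass:
T, P, L, H = 8, 49, 4, 4                         # frames, patches/frame, layers, heads
n = 1 + T * P
attn_maps = [torch.softmax(torch.randn(H, n, n), dim=-1) for _ in range(L)]
spatial, temporal = rollout_attribution(attn_maps, T, P)
print(spatial.shape, temporal.shape)             # torch.Size([8, 49]) torch.Size([8])
```

Because the attention maps are collected during the model's normal forward pass, an attribution of this style adds only a small post-hoc computation rather than extra inference runs.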
A microservice framework for the early adoption of Explainable AI in MLOps pipelines, enabling cloud-agnostic XAI operations.
Open API architecture for discovering trustworthy explanations of cloud AI services without exposing model internals (a client usage sketch is shown below).
- Published in IEEE Transactions on Cloud Computing (2024)
- Paper
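To illustrate how a cloud-agnostic XAI service can be consumed without access to model internals, here is a minimal hypothetical client sketch. The endpoint URL, payload fields, and response schema are assumptions made for illustration and do not reflect the actual XAIport API.

```python
# Hypothetical client for a cloud-hosted XAI microservice. Endpoint, payload
# fields, and response keys are placeholders, not the real XAIport interface.
import requests

XAI_SERVICE_URL = "https://xai-service.example.com/v1/explanations"  # placeholder URL

def request_explanation(model_id: str, sample_uri: str, method: str = "attention_rollout"):
    """Request an explanation for one prediction without exposing model internals:
    the caller sends only an opaque model identifier and a reference to the input."""
    payload = {
        "model_id": model_id,     # handle to a deployed cloud model
        "input_uri": sample_uri,  # e.g. an object-storage URI for a video clip
        "xai_method": method,     # explanation technique the service should run
    }
    resp = requests.post(XAI_SERVICE_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()            # e.g. {"saliency_map_uri": ..., "scores": ...}

# Usage (requires a running service):
# result = request_explanation("video-classifier-01", "s3://bucket/clip_0001.mp4")
# print(result["scores"])
```

The point of this pattern is that explanation generation lives behind a service boundary, so the same client code works regardless of which cloud or open-source model sits behind the endpoint.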
10 peer-reviewed papers in top venues including:
- ICSE (A* Conference)
- IEEE Transactions on Cloud Computing (Q1 Journal, IF: 5.3)
- IEEE Access (Q1 Journal)
- ACM TOMM (Under Review)
- IEEE SSE, COMPSAC, IEEE Big Data
View all publications on Google Scholar
- XAIport - Explainable AI service framework for cloud and open-source models
- XAIpipeline - Automated orchestration of XAI workflows
- Peer Reviewer: 20+ manuscripts for IEEE Transactions journals and the AAAI Conference
- Workshop Lead: CASCON 2024 - "Develop Explainable AI Services on Cloud Computing"
- Member: IEEE Computer Society, ACM
- PhD in Computer Engineering, Concordia University, Canada (2025)
- MSc in Process Systems Engineering, TU Dortmund, Germany
- BSc in Process Systems Engineering, CUMT, China
"Making AI transparent, trustworthy, and accountable."


