A production-ready tomato plant disease classification system using a deep learning CNN model trained on the PlantVillage dataset. The project includes web, mobile, and cloud support with monitoring, testing, and a GUI-based infrastructure controller.
- Multiclass CNN model trained to classify 10 tomato plant diseases
- Achieved 98.51% test accuracy
- Input shape: `(256, 256, 3)` | Batch size: `32` | Epochs: `50`
- Local inference via the saved `.keras` model
- TensorFlow Serving via a Docker container
- GCP Cloud Function for remote inference
- Integrated Prometheus + Grafana via Docker Compose
- Custom dashboards (JSON-exported & version-controlled)
- Unit tests for training, backend utils, config, routing
- Integration tests covering all API endpoints
- CI/CD with GitHub Actions — parallel execution of all tests
- Frontend (React): Upload and predict via web UI (env-driven)
- Mobile App (React Native CLI): Supports camera + image picker
- Cross-platform GUI (tkinter): Manage API, TF Serving, monitoring, logs, `.env`, and project export
- Unified environment setup via `.env` (root + frontend)
- Start/Stop automation scripts using `make`, `.sh`, `.bat`, and the GUI launcher
The model is a Convolutional Neural Network (CNN) trained on the PlantVillage dataset to classify 10 types of tomato plant diseases, including healthy leaves.
- Framework: TensorFlow + Keras
- Input Shape: `(256, 256, 3)`
- Batch Size: `32`
- Epochs: `50`
- Optimizer: `adam`
- Loss Function: `SparseCategoricalCrossentropy`
- Test Accuracy: `98.51%`
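The configuration above maps to Keras roughly as follows. This is only a sketch: the dataset path, the layer stack, and the output filename are assumptions, not taken from `training/`.

```python
import tensorflow as tf

IMAGE_SIZE = (256, 256)   # matches the documented input shape (256, 256, 3)
BATCH_SIZE = 32
EPOCHS = 50
NUM_CLASSES = 10

# Hypothetical dataset directory; the real training pipeline may differ.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/PlantVillage/tomato",
    image_size=IMAGE_SIZE,
    batch_size=BATCH_SIZE,
)

# Illustrative CNN; the actual architecture is not documented in this README.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES),  # logits; paired with from_logits=True below
])

# Optimizer and loss as documented above.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

model.fit(train_ds, epochs=EPOCHS)
model.save("models/tomato_cnn.keras")  # hypothetical output path
```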
['Bacterial_spot', 'Early_blight', 'Late_blight', 'Leaf_Mold',
 'Septoria_leaf_spot', 'Spider_mites_Two_spotted_spider_mite',
 'Target_Spot', 'YellowLeaf__Curl_Virus', 'mosaic_virus', 'healthy']

All plots are saved under training/plots/.
- Accuracy & Loss
- Confusion Matrix
- Classification Heatmap
- Per-Class Radar Plot
- Classification Report
api/
├── main.py # FastAPI app entrypoint
├── model_local.py # Local model inference
├── model_tf_serving.py # TF Serving inference
├── config.py # Reads .env
├── utils.py # Preprocessing utils
└── logs/app.log
training/
└── utils/ # Model training utilities
tests/
├── unit/ # Unit tests for utils, config, etc.
└── integration/ # Integration tests for endpoints
dashboards/
├── docker-compose.yaml # Prometheus + Grafana stack
├── prometheus.yaml # Scrape configs
└── grafana/ # JSON dashboards
scripts/
├── start_tf_serving.sh/.bat
├── stop_tf_serving.sh/.bat
├── start_monitoring.sh/.bat
└── stop_monitoring.sh/.bat
frontend/ # React app
└── .env (REACT_APP_USE_GCP, etc.)
mobile/ # React Native App
└── React Native CLI app
launcher.py # GUI launcher
launcher.exe # executable launcher
make, make.bat # Unified CLI workflow
| Mode | How to Activate | Backend Used |
|---|---|---|
| Local | `USE_TF_SERVING=False` | `.keras` model |
| TensorFlow Serving | `USE_TF_SERVING=True` | Docker container |
| GCP Cloud | `REACT_APP_USE_GCP=True` (frontend) | Cloud Function |

All configured via the root-level `.env`.
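The switch between backends is driven by these environment variables. As a hedged sketch of how `api/config.py` could read the flag and pick a backend (the function name and the use of python-dotenv are assumptions; the real module may differ):

```python
# Hypothetical sketch of env-driven backend selection (not the actual api/config.py).
import os

from dotenv import load_dotenv  # python-dotenv, assumed to be how .env is read

load_dotenv()  # reads the root-level .env

USE_TF_SERVING = os.getenv("USE_TF_SERVING", "False").lower() == "true"

def get_predictor():
    """Return the inference backend selected by the .env flag."""
    if USE_TF_SERVING:
        from api import model_tf_serving as backend  # TF Serving (Docker) path
    else:
        from api import model_local as backend       # local .keras model path
    return backend
```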
Launch the cross-platform GUI controller:

python installer/launcher.py

Or use the prebuilt launcher.exe for Windows.
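The launcher's internals are not documented here; the sketch below only illustrates the general pattern of a tkinter window that starts and stops services through the existing scripts (the widget layout and script wiring are assumptions):

```python
# Hypothetical tkinter launcher sketch; the real installer/launcher.py may differ.
import subprocess
import tkinter as tk

def run_script(path: str) -> None:
    """Start a helper script without blocking the GUI."""
    subprocess.Popen(["sh", path])

root = tk.Tk()
root.title("PlantShield Controller")

tk.Button(root, text="Start TF Serving",
          command=lambda: run_script("scripts/start_tf_serving.sh")).pack(fill="x")
tk.Button(root, text="Stop TF Serving",
          command=lambda: run_script("scripts/stop_tf_serving.sh")).pack(fill="x")
tk.Button(root, text="Start Monitoring",
          command=lambda: run_script("scripts/start_monitoring.sh")).pack(fill="x")
tk.Button(root, text="Stop Monitoring",
          command=lambda: run_script("scripts/stop_monitoring.sh")).pack(fill="x")

root.mainloop()
```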
# Run tests
make test-unit
make test-integration
# Unit tests (training, utils, routing)
pytest tests/unit
# Integration tests (real image input)
pytest tests/integration

Logs are saved in api/logs/app.log.
CI is powered by GitHub Actions; unit and integration tests run in parallel.
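For orientation, an integration test against the prediction endpoint might look roughly like this. The route name `/predict`, the form field name, the fixture path, and the response keys are assumptions; see `tests/integration/` for the real tests.

```python
# Hypothetical integration test sketch; endpoint, fixture path, and schema are assumptions.
from fastapi.testclient import TestClient

from api.main import app  # FastAPI app entrypoint (see api/main.py)

client = TestClient(app)

def test_predict_returns_a_known_class():
    # Assumed sample image checked into the test fixtures.
    with open("tests/integration/sample_leaf.jpg", "rb") as f:
        response = client.post(
            "/predict",                      # assumed route name
            files={"file": ("leaf.jpg", f, "image/jpeg")},
        )
    assert response.status_code == 200
    body = response.json()
    assert "class" in body and "confidence" in body  # assumed response schema
```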
Use the locally saved .keras model for prediction
# Ensure this is set in .env
USE_TF_SERVING=False
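Under the hood, local inference amounts to loading the saved `.keras` model and running a forward pass. A minimal sketch, assuming a model path and the class names listed above (the real `api/model_local.py` may structure this differently):

```python
# Hypothetical local inference sketch; model path and helper names are assumptions.
import numpy as np
import tensorflow as tf

CLASS_NAMES = [
    "Bacterial_spot", "Early_blight", "Late_blight", "Leaf_Mold",
    "Septoria_leaf_spot", "Spider_mites_Two_spotted_spider_mite",
    "Target_Spot", "YellowLeaf__Curl_Virus", "mosaic_virus", "healthy",
]

model = tf.keras.models.load_model("models/tomato_cnn.keras")  # assumed path

def predict(image_path: str) -> tuple[str, float]:
    """Resize to the model's 256x256 input, predict, and return (class, confidence)."""
    img = tf.keras.utils.load_img(image_path, target_size=(256, 256))
    batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
    probs = tf.nn.softmax(model.predict(batch)[0]).numpy()
    idx = int(np.argmax(probs))
    return CLASS_NAMES[idx], float(probs[idx])
```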
Serve the model using TensorFlow Serving inside a Docker container:

# Set in .env
USE_TF_SERVING=True

# Start the TF Serving container
sh scripts/start_tf_serving.sh   # or start_tf_serving.bat

# Then launch the API
python -m api.main
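In this mode the API (or a quick manual check) calls TF Serving's REST endpoint. A hedged sketch of such a call, assuming the container exposes the default port 8501 and a model name of `tomato_model` (both are assumptions; check `scripts/start_tf_serving.sh` for the actual values):

```python
# Hypothetical TF Serving REST call; port, model name, and preprocessing are assumptions.
import numpy as np
import requests
import tensorflow as tf

TF_SERVING_URL = "http://localhost:8501/v1/models/tomato_model:predict"  # assumed

img = tf.keras.utils.load_img("leaf.jpg", target_size=(256, 256))
batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)

# TF Serving's REST API expects JSON with an "instances" field.
response = requests.post(TF_SERVING_URL, json={"instances": batch.tolist()})
response.raise_for_status()
predictions = np.array(response.json()["predictions"][0])
print("Predicted class index:", int(np.argmax(predictions)))
```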
To stop TF Serving:
sh scripts/stop_tf_serving.sh   # or stop_tf_serving.bat
Enable observability and metrics tracking:

# Set in .env
ENABLE_METRICS=True

# Start monitoring stack
sh scripts/start_monitoring.sh   # or start_monitoring.bat

# Alternatively, use Docker Compose directly
docker-compose -f dashboards/docker-compose.yaml up
To stop the monitoring services:
scripts/stop_monitoring.sh   # or stop_monitoring.bat
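How the API actually exports metrics is not shown in this section. Purely as one hedged illustration, a FastAPI service gated by `ENABLE_METRICS` could expose a Prometheus endpoint like this (the `prometheus_client` dependency, the counter name, and the stub route are assumptions):

```python
# Hypothetical metrics wiring; not the project's actual instrumentation.
import os

from fastapi import FastAPI
from prometheus_client import Counter, make_asgi_app

app = FastAPI()

if os.getenv("ENABLE_METRICS", "False").lower() == "true":
    # Expose Prometheus-format metrics at /metrics for the scrape config to pick up.
    app.mount("/metrics", make_asgi_app())

PREDICTIONS = Counter("predictions_total", "Number of prediction requests served")

@app.post("/predict")          # assumed route name
async def predict_stub():
    PREDICTIONS.inc()          # count each prediction request
    return {"status": "ok"}    # placeholder response for the sketch
```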
For GCP Cloud inference, set in frontend/.env:
REACT_APP_USE_GCP=True
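The Cloud Function itself is not included in this section. As an orientation sketch only, an HTTP-triggered function for remote inference could look like this (the `functions_framework` entry point, model location, and request format are assumptions):

```python
# Hypothetical GCP Cloud Function sketch; model path and payload format are assumptions.
import functions_framework
import numpy as np
import tensorflow as tf
from PIL import Image

model = None  # loaded lazily so cold starts only pay the cost once

@functions_framework.http
def predict(request):
    global model
    if model is None:
        # Assumed: model file bundled with the function or fetched at startup.
        model = tf.keras.models.load_model("tomato_cnn.keras")

    # Assumed: the frontend uploads the image as multipart form data under "file".
    image = Image.open(request.files["file"]).convert("RGB").resize((256, 256))
    batch = np.expand_dims(np.asarray(image, dtype=np.float32), axis=0)

    probs = tf.nn.softmax(model.predict(batch)[0]).numpy()
    return {"class_index": int(np.argmax(probs)), "confidence": float(np.max(probs))}
```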
The entire monitoring stack (Prometheus + Grafana) and optional backend services can be started with Docker Compose.

1. Start all services
docker compose up -d

2. Stop all services
docker compose down

Deploy the PlantShield AI stack to any Kubernetes cluster.
1. Create namespace
kubectl create namespace plantshield

2. Apply manifests
kubectl apply -k k8s/base

3. Check pods
kubectl get pods -n plantshield

4. Expose services
You can use NodePort, LoadBalancer, or an Ingress controller based on your environment.
A Helm chart is provided to package and deploy the entire PlantShield AI application stack.
1. Install chart
helm install plantshield helm/plantshield --namespace plantshield --create-namespace

2. Upgrade release
helm upgrade plantshield helm/plantshield -n plantshield

3. Uninstall
helm uninstall plantshield -n plantshield

Contributions are welcome! Feel free to:
- Open issues for bugs, enhancements, or questions
- Submit pull requests with new features or fixes
- Improve test coverage or the monitoring setup
Follow a structured branching model:
- `main` – stable, production-ready branch
- `dev` – active development integration branch
- `feature/*` – new features (e.g., `feature/frontend`, `feature/deployment/gcp`, `feature/testing/unit`)
- `docs/*` – documentation changes (e.g., `docs/setup-meta-files`)
- `ci/*` – CI/CD and automation (e.g., `ci/setup-github-actions`)
📌 Please use meaningful commit messages following the Conventional Commits style.