- Introduction
- Key Features
- Typical Workflow
- Project Structure
- Configuration
- How to Use
- Application Outputs
- Installation
- Distribution
- Further Development
- Licensing
## Introduction

This project provides a lightweight desktop application for the batch detection, classification, and manual review of trail-camera images, designed specifically for predator control in the New Zealand conservation context. The objective is a tool that speeds up the iterative development of tailored detection models, which can then be deployed to the field to send alerts for priority species. By reviewing and easily comparing pest detections against true pest occurrence in images, a detection model can then be fine-tuned to improve performance and accuracy.
The tool combines fast automated detection (via ONNX models) with a manual review interface, making it easy to standardise initial classification and then efficiently correct or confirm results. Both the initial detection and the manual review file away copies of images and produce a summary CSV, useful both for statistics and for finding images that can be added to new builds of the detection model.
Although this tool has been developed for predator detection, it can be used with any custom ONNX model by editing the config.yaml file with the classes specific to that model.
The current model supplied with this application was trained on 7,000 images from approximately 60 trail cameras inside New Zealand native bush. The class labels included in the supplied model are:
- Cat
- Possum
- Ferret
- Stoat
- Rat
- Kiwi
- Non Target
- Native Non Target
- Person
The confusion matrix for this model (20% validation split) is here.
## Key Features

- **ONNX-based object detection**
  - No PyTorch or Ultralytics runtime required
  - Compact, fast, and suitable for standalone distribution
- **Config-driven class setup**
  - Classes, display names, colours, priorities, and keyboard shortcuts are defined in config.yaml
  - Supports custom-trained ONNX models
- **Automated batch classification**
  - Processes folders of images
  - Outputs both plain and annotated copies (optional)
  - Optional mirroring of sub-folder structure (e.g. camera- or date-based)
  - Confidence-based splitting (e.g. high / low confidence)
- **Interactive review & correction UI**
  - Image-by-image navigation
  - Keyboard shortcuts for rapid reclassification
  - Side-by-side plain / annotated views
  - Undo support for recent actions
  - Resume review from where you left off
- **Structured outputs**
  - Final folder structure containing reviewed images
  - Optional class-prefixed filenames
  - Summary CSV capturing:
    - model classification (i.e. initial detected class)
    - final reviewed classification
    - filenames and confidence scores
## Typical Workflow

1. **Run detection**
   - Select the root input folder of images
   - Choose (creating it if needed) an output folder
   - Run automated classification
   - Initial counts and detection confidence are displayed
   - A `.csv` file is generated recording the filed folder, the original and saved filenames, the main detection (priority class), the detection confidence, and all detections found
2. **Review results**
   - Images are automatically loaded for review
   - Navigate through detections
   - Correct classifications where needed (by mouse, or by the keyboard key assigned to each class in config.yaml)
   - The last reviewed image is saved, so you can close the application and return to the same review spot on re-opening
3. **Final outputs**
   - A `_final` folder containing reviewed images
   - A CSV summary suitable for reporting or further analysis
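The "main detection (priority class)" step above can be sketched as follows. This is an illustration only: the class ordering and the tie-breaking rule here are assumptions, and the real logic lives in classifier_core.py and config.yaml.

```python
# Illustrative priority resolution: the first class in the priority list
# that appears among an image's detections becomes its main label.
# The ordering below is an example only - the real order comes from config.yaml.
PRIORITY = ["cat", "ferret", "stoat", "possum", "rat", "kiwi",
            "native_non_target", "non_target", "person"]

def main_label(detections):
    """detections: list of (class_name, confidence) pairs for one image."""
    for cls in PRIORITY:
        scores = [conf for name, conf in detections if name == cls]
        if scores:
            return cls, max(scores)
    # Per the CSV notes, a non-detection always carries 0% confidence.
    return "non_detection", 0.0

print(main_label([("rat", 0.72), ("cat", 0.65)]))  # ('cat', 0.65)
```

Even though the rat was detected with higher confidence, the cat detection wins because it sits higher in the priority order.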
## Project Structure

```
app
|-- classifier_core.py
|-- classify_review_gui.py
`-- main.py
models
|-- config.yaml
`-- model.onnx
```
- classifier_core.py – model loading, detection runs, and image filing
- classify_review_gui.py – the user interface, built with Python tkinter and customtkinter
- model.onnx – your ONNX detection model
- config.yaml – class definitions, priorities, colours, and hotkeys
## Configuration

The application is intentionally model-agnostic. You can:

- Replace model.onnx with your own ONNX detection model
- Edit config.yaml to define:
  - class names and display labels (these must match the class list and order of the ONNX model used)
  - the priority order for resolving multiple detections (e.g. a cat detection takes priority over a rat detection in the same image)
  - actions, which allow additional classes not currently in the detection model to be used when reviewing and filing images
  - bounding-box colours for the annotated images created
  - preferred keyboard shortcuts

This allows the tool to adapt to different projects, species, or environments without code changes.
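As an illustration, a class entry in config.yaml might look something like the following. The field names here are illustrative only; the supplied config.yaml defines the actual schema.

```yaml
# Illustrative only - check the supplied config.yaml for the real field names.
classes:
  - name: rat
    display: Rat
    colour: "#d62728"   # bounding-box colour in annotated copies
    priority: 5          # lower number = higher priority when resolving detections
    hotkey: r            # review-window shortcut key
  - name: cat
    display: Cat
    colour: "#1f77b4"
    priority: 1
    hotkey: c
```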
## How to Use

### Classification Settings
- Input folder: the root folder where the images (and any sub-folders) are filed
- Output folder: where the copies of images should be saved after detection
- Confidence threshold: the confidence score at which detections are split into high_conf or low_conf folders
- Mirror sub-folders / Single output tree: select 'Mirror' if the sub-folder structure should be maintained (e.g. detections filed by camera)
- Save plain copies: on by default. Select to save images into the detected class's plain folder
- Save annotated copies: on by default. Select to save images, with bounding box and class label drawn, into the detected class's annotated folder
- Run classification: starts the classification and image filing
- Progress bar: shows the number of images scanned and filed out of all images found in the input folder
- EMA conf (detected): the exponential moving average confidence score of all detections made so far
- Save Image Filename: additional prefixes that can be added to the **saved** copies of each image
  - Sub folder name: useful if you want to keep, for example, the camera name in the classified image filename
  - Save Datetime: if the image capture datetime is available in the original image, it is written into the saved image copy's filename
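The two filename-prefix options could combine along these lines (a sketch only; the real naming logic is in classifier_core.py and may differ):

```python
# Hypothetical sketch of how the optional sub-folder and datetime prefixes
# might be combined into the saved copy's filename.
def saved_name(original, subfolder=None, datetime_stamp=None):
    parts = [p for p in (subfolder, datetime_stamp) if p]
    parts.append(original)
    return "_".join(parts)

# Produces names in the style of the demo output, e.g.:
print(saved_name("PICT0010.JPG", "cam1", "2025-10-14T11-18-00"))
# cam1_2025-10-14T11-18-00_PICT0010.JPG
```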
### Statistics

- Class counts: the number of detections made per class
- EMA: the exponential moving average confidence score for that class during detection
- Final counts: the final number per class after review and any corrections made
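The EMA figures are an exponential moving average of per-detection confidence. A minimal sketch follows; the smoothing factor `alpha = 0.1` is an assumption, not the value the application actually uses.

```python
# Exponential moving average of detection confidence.
# alpha = 0.1 is an assumed smoothing factor, not the app's actual value.
def update_ema(ema, confidence, alpha=0.1):
    if ema is None:            # the first detection seeds the average
        return confidence
    return alpha * confidence + (1 - alpha) * ema

ema = None
for conf in (0.90, 0.80, 0.85):   # confidences of successive detections
    ema = update_ema(ema, conf)
print(round(ema, 3))  # 0.886
```

Unlike a plain mean, recent detections weigh more heavily, so the figure tracks how confident the model currently is as a run progresses.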
### Review and Correction Window

- Source folder: the folder of filed detection images to review (defaults to the Output folder after Run classification)
- Load Images: starts at the first image of the first sub-folder, unless you are resuming a previously incomplete review session, in which case the next image still to be reviewed is loaded
- Prefix class in final filenames: off by default. Select this if you want to prefix all image files with the class name.
- << Prev / Next >>: buttons for navigating images (the left / right arrow keys also work). **Important:** when you navigate to the next image (button or → key), the currently detected class is taken to be the final, correct class. Navigating back (Prev button or ← key) goes back one image and undoes its final classification; from there you can either navigate forward again to re-confirm the class, or use a correction button / key to set the final correct class
- Annotated / Plain: the default view is the annotated image. Running classification with both 'save plain' and 'save annotated' on (the default) produces two mirrored folder structures, one of plain and one of annotated images. With both present, you can toggle between the annotated image and the same image without any bounding box or label. Use the ↑ / ↓ keys to toggle
### User Actions

- Detected: the current priority detected class for the image (high_conf / low_conf). Image navigation works through one sub-folder (e.g. camera) at a time, then by class within that sub-folder, and by high_conf / low_conf within that class. This means you review images in 'batches' by detected class.
- Image filename
- Mark correction as: all the classes you can correct to. These can be altered in the config.yaml file, including the shortcut key assigned to each class
- Progress: folders completed / total folders, and images completed / total images
- Note: in the example shown, the last image is in fact a rat (now corrected).
- For this demonstration, 1,095 images took about 15-20 minutes for a full classification, review, and save into the final classification folder structure. The statistics give you an idea of the initial class detections versus the final class counts. With further fine-tuning of the detection model, these numbers should converge.
## Application Outputs

The following is a snippet of the output from running classification and then reviewing. The result is:

- Input folder `demo_images` containing 5 sub-folders (1 per camera)
- Output folder `demo_images_output`, now with a top-level annotated / plain duplication of the 5 original sub-folders
  - below each sub-folder there is now a folder for each class detected, with a high_conf / low_conf split
  - note that in this example, for cam1 there were 6 images of person detected with high_conf (>80%), 11 rats with high_conf, and 7 rats with low_conf (<80%)
  - a classified_results.csv file
- Final output folder `demo_images_output_final` containing the same structure as `demo_images_output` but without the high_conf / low_conf split, since after final review all images are deemed to be in their correct classification
  - a classified_results.csv file (see below)
  - a review_state.json file, which records the user's current position in reviewing images and allows resuming from the same place
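The review_state.json resume behaviour could be approximated like this. The real file's keys are not documented here, so `folder` and `image_index` are assumed names:

```python
import json
from pathlib import Path

STATE_FILE = Path("review_state.json")

def save_state(folder, image_index):
    """Record the reviewer's current position."""
    STATE_FILE.write_text(json.dumps({"folder": folder, "image_index": image_index}))

def load_state():
    """Return the saved position, or the start of the first folder."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"folder": None, "image_index": 0}
```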
```
demo_images
|-- cam1
|   |-- 10150055.JPG
|   |-- 10150056.JPG
|   |-- 10150057.JPG
|   |-- 10150058.JPG
|   |-- 10150059.JPG
|   |-- 10160060.JPG
|   |-- 10160061.JPG
|   |-- 10160062.JPG
|   |-- 10160063.JPG
|   |-- 10160064.JPG
|   |-- 10170065.JPG
|   |-- IMG_0256.JPG
|   |-- IMG_0257.JPG
|   |-- IMG_0258.JPG
|   |-- IMG_0259.JPG
|   |-- IMG_0260.JPG
|   |-- IMG_0261.JPG
|   |-- IMG_0262.JPG
|   |-- PICT0009.JPG
|   |-- PICT0010.JPG
|   |-- PICT0011.JPG
|   |-- PICT0012.JPG
|   |-- PICT0013.JPG
|   `-- PICT0014.JPG
|-- cam2
|-- cam3
|-- cam4
`-- cam5
```
```
demo_images_output
|-- annotated
|   `-- cam1
|       |-- person
|       |   `-- high_conf
|       |       |-- cam1_2025-10-14T11-18-00_PICT0010.JPG
|       |       |-- cam1_2025-10-14T17-58-02_PICT0011.JPG
|       |       |-- cam1_2025-10-16T08-32-18_10160060.JPG
|       |       |-- cam1_2025-10-16T10-28-55_10160061.JPG
|       |       |-- cam1_2025-10-16T10-41-50_PICT0012.JPG
|       |       `-- cam1_2025-10-19T15-36-18_PICT0014.JPG
|       `-- rat
|           |-- high_conf
|           |   |-- cam1_2025-10-15T19-38-07_10150055.JPG
|           |   |-- cam1_2025-10-15T20-03-39_10150056.JPG
|           |   |-- cam1_2025-10-15T20-38-52_10150057.JPG
|           |   |-- cam1_2025-10-15T21-35-15_10150058.JPG
|           |   |-- cam1_2025-10-15T23-02-11_10150059.JPG
|           |   |-- cam1_2025-10-17T02-37-58_10170065.JPG
|           |   |-- cam1_2025-11-12T00-36-20_IMG_0256.JPG
|           |   |-- cam1_2025-11-12T00-50-08_IMG_0257.JPG
|           |   |-- cam1_2025-11-12T02-07-56_IMG_0259.JPG
|           |   |-- cam1_2025-11-12T03-44-23_IMG_0260.JPG
|           |   `-- cam1_2025-11-12T22-34-39_IMG_0262.JPG
|           `-- low_conf
|               |-- cam1_2025-10-14T04-31-01_PICT0009.JPG
|               |-- cam1_2025-10-16T20-18-13_10160062.JPG
|               |-- cam1_2025-10-16T21-18-22_10160063.JPG
|               |-- cam1_2025-10-16T21-43-05_10160064.JPG
|               |-- cam1_2025-10-18T01-26-42_PICT0013.JPG
|               |-- cam1_2025-11-12T01-30-46_IMG_0258.JPG
|               `-- cam1_2025-11-12T22-12-24_IMG_0261.JPG
|-- classified_results.csv
`-- plain
    `-- cam1
        |-- person
        |   `-- high_conf
        |       |-- cam1_2025-10-14T11-18-00_PICT0010.JPG
        |       |-- cam1_2025-10-14T17-58-02_PICT0011.JPG
        |       |-- cam1_2025-10-16T08-32-18_10160060.JPG
        |       |-- cam1_2025-10-16T10-28-55_10160061.JPG
        |       |-- cam1_2025-10-16T10-41-50_PICT0012.JPG
        |       `-- cam1_2025-10-19T15-36-18_PICT0014.JPG
        `-- rat
            |-- high_conf
            |   |-- cam1_2025-10-15T19-38-07_10150055.JPG
            |   |-- cam1_2025-10-15T20-03-39_10150056.JPG
            |   |-- cam1_2025-10-15T20-38-52_10150057.JPG
            |   |-- cam1_2025-10-15T21-35-15_10150058.JPG
            |   |-- cam1_2025-10-15T23-02-11_10150059.JPG
            |   |-- cam1_2025-10-17T02-37-58_10170065.JPG
            |   |-- cam1_2025-11-12T00-36-20_IMG_0256.JPG
            |   |-- cam1_2025-11-12T00-50-08_IMG_0257.JPG
            |   |-- cam1_2025-11-12T02-07-56_IMG_0259.JPG
            |   |-- cam1_2025-11-12T03-44-23_IMG_0260.JPG
            |   `-- cam1_2025-11-12T22-34-39_IMG_0262.JPG
            `-- low_conf
                |-- cam1_2025-10-14T04-31-01_PICT0009.JPG
                |-- cam1_2025-10-16T20-18-13_10160062.JPG
                |-- cam1_2025-10-16T21-18-22_10160063.JPG
                |-- cam1_2025-10-16T21-43-05_10160064.JPG
                |-- cam1_2025-10-18T01-26-42_PICT0013.JPG
                |-- cam1_2025-11-12T01-30-46_IMG_0258.JPG
                `-- cam1_2025-11-12T22-12-24_IMG_0261.JPG
```
```
demo_images_output_final
|-- annotated
|   `-- cam1
|       |-- person
|       |   |-- cam1_2025-10-14T11-18-00_PICT0010.JPG
|       |   |-- cam1_2025-10-14T17-58-02_PICT0011.JPG
|       |   |-- cam1_2025-10-16T08-32-18_10160060.JPG
|       |   |-- cam1_2025-10-16T10-28-55_10160061.JPG
|       |   |-- cam1_2025-10-16T10-41-50_PICT0012.JPG
|       |   `-- cam1_2025-10-19T15-36-18_PICT0014.JPG
|       `-- rat
|           |-- cam1_2025-10-14T04-31-01_PICT0009.JPG
|           |-- cam1_2025-10-15T19-38-07_10150055.JPG
|           |-- cam1_2025-10-15T20-03-39_10150056.JPG
|           |-- cam1_2025-10-15T20-38-52_10150057.JPG
|           |-- cam1_2025-10-15T21-35-15_10150058.JPG
|           |-- cam1_2025-10-15T23-02-11_10150059.JPG
|           |-- cam1_2025-10-16T20-18-13_10160062.JPG
|           |-- cam1_2025-10-16T21-18-22_10160063.JPG
|           |-- cam1_2025-10-16T21-43-05_10160064.JPG
|           |-- cam1_2025-10-17T02-37-58_10170065.JPG
|           |-- cam1_2025-10-18T01-26-42_PICT0013.JPG
|           |-- cam1_2025-11-12T00-36-20_IMG_0256.JPG
|           |-- cam1_2025-11-12T00-50-08_IMG_0257.JPG
|           |-- cam1_2025-11-12T01-30-46_IMG_0258.JPG
|           |-- cam1_2025-11-12T02-07-56_IMG_0259.JPG
|           |-- cam1_2025-11-12T03-44-23_IMG_0260.JPG
|           |-- cam1_2025-11-12T22-12-24_IMG_0261.JPG
|           `-- cam1_2025-11-12T22-34-39_IMG_0262.JPG
|-- plain
|   `-- cam1
|       |-- person
|       |   |-- cam1_2025-10-14T11-18-00_PICT0010.JPG
|       |   |-- cam1_2025-10-14T17-58-02_PICT0011.JPG
|       |   |-- cam1_2025-10-16T08-32-18_10160060.JPG
|       |   |-- cam1_2025-10-16T10-28-55_10160061.JPG
|       |   |-- cam1_2025-10-16T10-41-50_PICT0012.JPG
|       |   `-- cam1_2025-10-19T15-36-18_PICT0014.JPG
|       `-- rat
|           |-- cam1_2025-10-14T04-31-01_PICT0009.JPG
|           |-- cam1_2025-10-15T19-38-07_10150055.JPG
|           |-- cam1_2025-10-15T20-03-39_10150056.JPG
|           |-- cam1_2025-10-15T20-38-52_10150057.JPG
|           |-- cam1_2025-10-15T21-35-15_10150058.JPG
|           |-- cam1_2025-10-15T23-02-11_10150059.JPG
|           |-- cam1_2025-10-16T20-18-13_10160062.JPG
|           |-- cam1_2025-10-16T21-18-22_10160063.JPG
|           |-- cam1_2025-10-16T21-43-05_10160064.JPG
|           |-- cam1_2025-10-17T02-37-58_10170065.JPG
|           |-- cam1_2025-10-18T01-26-42_PICT0013.JPG
|           |-- cam1_2025-11-12T00-36-20_IMG_0256.JPG
|           |-- cam1_2025-11-12T00-50-08_IMG_0257.JPG
|           |-- cam1_2025-11-12T01-30-46_IMG_0258.JPG
|           |-- cam1_2025-11-12T02-07-56_IMG_0259.JPG
|           |-- cam1_2025-11-12T03-44-23_IMG_0260.JPG
|           |-- cam1_2025-11-12T22-12-24_IMG_0261.JPG
|           `-- cam1_2025-11-12T22-34-39_IMG_0262.JPG
|-- classified_results.csv
`-- review_state.json
```
### Summary Output

classified_results.csv columns:

- folder: the sub-folder name, if sub-folders were specified (Mirrored) in Run classification
- original_filename
- saved_filename: merges the folder name and filename (a safety measure in case multiple images share the same file name), plus any datetime stamp found for the image
- datetime_stamp: written here if datetime metadata is found in the image
  - could be parsed, for example, to give class counts per calendar day, week of year, etc.
- main_label: the initial detected class
- main_confidence_pct: the model's percentage confidence for that detection (note that non_detection is always 0%, as no detection calculation is actually made)
- all_detections: if multiple classes were detected, they are all listed here
- final_filename
- final_classification: the final classification after review (in this example, one non_detection image was corrected to 'rat' during review)
This file is useful for reporting on pest incidence rates, as well as for finding images that will help fine-tune future models. For example, you could filter for all images where the main label differs from the final classification to collect class corrections. Examples I commonly see are cat detected but possum confirmed, cat/kiwi, and stoat/rat. Also highly recommended is gathering images where there was an initial detection but the final classification is no detection; these should be added to future training to reinforce the plain backgrounds where no animals are in fact present.
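That filter could be scripted along these lines. The column names follow the fields listed above, but should be checked against an actual classified_results.csv before relying on them:

```python
import csv

def misclassified(csv_path):
    """Rows where the model's initial label disagrees with the reviewed class -
    prime candidates for the next round of fine-tuning data."""
    with open(csv_path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row["main_label"] != row["final_classification"]]
```

Each returned row carries the saved filename, so the corrected images can be located and copied into a new training set.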
## Installation

Options for installing and running:

- Requires:
  - Windows 10+
  - x64 CPU
- Run from source (recommended for developers): Python 3.10+, then `pip install -r requirements.txt`
- Standalone executable (recommended for users without Python):
  - Go to the Releases page and download the latest PredatorClassifier-...windows-x64.zip
  - Extract the zip, making sure to keep the models/ folder next to the .exe, then run the .exe
- Build it yourself: PyInstaller is recommended
## Distribution

A standalone Windows executable is provided via GitHub Releases.

The executable:

- Does not require Python to be installed
- Loads models and configuration from the bundled models folder
- Can be redistributed within teams or volunteer groups
## Further Development

This tool was developed to support conservation and predator-control programmes by:

- speeding up image review workflows
- reducing manual classification effort
- maintaining consistency across reviewers
- enabling continuous improvement of detection models through corrected data
It would be exciting to get feedback on how well the supplied ONNX model detects predators in your own trapping projects. I suspect that the environment in which your trail cameras are set (bush/background, lighting, distance to bait station, etc.) will be a factor in how well it performs for you. I would also be very appreciative of any trail camera images you can supply of rarely pictured animals, especially ferrets and kiwi, which I could include in further model training so that the publicly available ONNX model supplied here can be improved for detecting these species.
Please follow this project for further updates, as I will also be publishing two further GitHub repos:

- First, a how-to guide for training (then enhancing) your own detection model with YOLO (exported to ONNX), specific to your own environment, which could then be used with this application
- Second, a how-to guide for building and deploying a remote detection service paired to a trail camera, which can be configured to check for images and send SMS alerts on a schedule for any high-priority animals. I have already developed the software side of this project and am finishing a prototype for testing
## Licensing

The application code is licensed under the MIT License - see LICENSE.


