For identification of competition items.
Python 3.10; use `pip install -r requirements.txt` to install dependencies.
Download `sam_vit_b_01ec64.pth` to `.` (the repository root).
If you use the conda `.yml` file, make sure to install LangSAM and SAM2 manually from GitHub.
Currently, only live sampling through a RealSense camera is supported.
You may change the directory of saved samples in `.env`. Ensure the folder under `DATASET_DIR` is empty; otherwise old files will be mixed into your new data.
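For reference, a minimal `.env` sketch, assuming standard `KEY=VALUE` dotenv syntax; the path shown is only a placeholder, so point it at whichever empty folder you want samples written to:

```
# .env -- example only; DATASET_DIR should point at an empty folder
DATASET_DIR=./dataset/raw_samples
```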
- Enter the labels and their corresponding GroundingDINO prompts in `resources/ontology.json` as `{"<GroundingDINO prompt>" : "label"}` pairs (see the example after this list).
- Connect the RealSense camera to the computer using a USB cable.
- `cd` into `yolo_tuning` and activate your conda environment: `conda activate visionTrain`.
- If you wish to train regular YOLO (bounding box only), use `python -m create_dataset`; otherwise, for YOLO-seg, use `python -m create_dataset_seg` to start constructing your dataset (the full command sequence is sketched after this list).
- An OpenCV window should pop up; follow the instructions shown in the terminal for a smooth dataset creation process!
- When you have finished annotating, press `q` to end.
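An example `resources/ontology.json` is sketched below; the prompts and labels here are placeholders, so replace them with your own competition items and the labels you want YOLO to learn:

```json
{
    "orange traffic cone": "cone",
    "small cardboard box": "box"
}
```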
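Putting the steps above together, a typical dataset-creation session might look like the following (commands taken from the list above; use the `_seg` variant if you want a segmentation dataset):

```bash
cd yolo_tuning
conda activate visionTrain

# regular YOLO (bounding boxes only)
python -m create_dataset

# or, for YOLO-seg:
# python -m create_dataset_seg
```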
- `cd` into `yolo_tuning`.
- Use `python -m prepare_dataset` to split the dataset into a YOLO-appropriate format.
- Use `python -m tune_YOLOv11` or `python -m tune_YOLOv11_seg` to start training (see the sketch after this list).
- The best finished weights will be saved to `yolo_finetuned_best.pt` or `yolo_seg_finetuned_best.pt`.
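As a quick reference, the training sequence above sketched as shell commands (script names taken from the list; use the `_seg` variants if you built a segmentation dataset):

```bash
cd yolo_tuning
python -m prepare_dataset      # split the dataset into YOLO format
python -m tune_YOLOv11         # fine-tune the bounding-box model
# python -m tune_YOLOv11_seg   # or: fine-tune the segmentation model
```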
Plug in the RealSense camera, `cd` into `yolo_tuning`, and use `python -m test_new_model` or `python -m test_new_model_seg` to test your newly trained model live!