Panoramic dental radiograph detector/segmenter built on YOLOv8 with optional tooth-status classification and FDI numbering. The repo ships only code, the latest detector artifacts, and test history—no bulky caches.
- Single entrypoints for detection, FDI numbering, and optional classification (`src/pipeline/inference.py`).
- Reproducible data prep (VIA → YOLO, merge with COCO) plus benchmarking/plotting helpers in `scripts/`.
- Latest detector artifacts in `workspace/detectors/pano-yolo-merged-cpu/` (PT/ONNX/ONNX-slim + plots).
- Validation snapshots in `workspace/validation/pano-yolo-merged-cpu-test6/` with JSON outputs and visual batches.
- Clean Python package under `src/` with configs and modular data/model/training components.
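How the pipeline assigns FDI numbers is not spelled out in this README; as background, a minimal sketch of the FDI two-digit scheme itself (quadrant digit times ten plus position from the midline), with a hypothetical helper name:

```python
def fdi_number(quadrant: int, position: int) -> int:
    """Compose an FDI tooth number from quadrant (1-4, patient's view:
    1 = upper right, 2 = upper left, 3 = lower left, 4 = lower right)
    and position counted from the midline (1 = central incisor,
    8 = third molar)."""
    if quadrant not in (1, 2, 3, 4):
        raise ValueError(f"invalid quadrant: {quadrant}")
    if not 1 <= position <= 8:
        raise ValueError(f"invalid position: {position}")
    return quadrant * 10 + position

# Upper-left first molar (quadrant 2, 6th from the midline) -> 26
print(fdi_number(2, 6))
```

The actual mapping from detector classes to FDI labels lives in `src/pipeline/inference.py`; this is only the numbering convention.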
- `src/config.py` – shared configs + YAML override loader.
- `src/data/` – VIA parser, datasets, and Albumentations transforms.
- `src/models/` – YOLO wrapper plus ResNet/EfficientNet classifier heads.
- `src/training/train_detector.py` – YOLOv8 detector training entrypoint.
- `src/training/train_classifier.py` – optional tooth-status classifier trainer.
- `src/pipeline/inference.py` – detection + optional classification CLI with FDI numbering.
- `scripts/prepare_yolo_dataset.py` – convert VIA annotations into YOLO folders and `dental.yaml`.
- `scripts/build_merged_yolo_dataset.py` – merge VIA and Roboflow COCO datasets.
- `scripts/benchmark_detector.py`, `scripts/plot_training_metrics.py` – runtime and metric plotting helpers.
- `workspace/` – latest detector + validation history kept after cleanup.
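The override loader in `src/config.py` is not shown here; a minimal sketch of what YAML-style override merging could look like, assuming a recursive dict merge over `yaml.safe_load` results (the function name and behavior are illustrative, not the actual implementation):

```python
def merge_overrides(base: dict, override: dict) -> dict:
    """Recursively merge an override mapping (e.g. the result of
    yaml.safe_load on an override file) onto a base config.
    Returns a new dict; override values win on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_overrides(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"detector": {"imgsz": 1024, "epochs": 80}, "seed": 1337}
override = {"detector": {"epochs": 20}}
print(merge_overrides(base, override))
# -> {'detector': {'imgsz': 1024, 'epochs': 20}, 'seed': 1337}
```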
Plots from the latest run are in `workspace/detectors/pano-yolo-merged-cpu/*.png`.
```bash
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
export PYTHONPATH=$(pwd)
```

- Convert VIA labels to YOLO segmentation:
```bash
PYTHONPATH=$(pwd) python scripts/prepare_yolo_dataset.py \
  --images-dir data/via \
  --annotations data/via/annotations.json \
  --output-dir workspace/yolo \
  --task segment --train-ratio 0.75 --val-ratio 0.15 --seed 1337
```

- (Optional) Merge with the Roboflow COCO export stored in `data/coco/`:
```bash
PYTHONPATH=$(pwd) python scripts/build_merged_yolo_dataset.py \
  --coco-root data/coco \
  --via-images data/via \
  --via-annotations data/via/annotations.json \
  --output-dir workspace/yolo_merged
```

- Train the detector (YOLOv8s-seg, 80 epochs, 1024px):
```bash
PYTHONPATH=$(pwd) python -m src.training.train_detector \
  --data-yaml workspace/yolo/dental.yaml \
  --model yolov8s-seg.pt \
  --project workspace/detectors \
  --name pano-yolo-merged-cpu
```

- Plot metrics for any run:
```bash
PYTHONPATH=$(pwd) python scripts/plot_training_metrics.py \
  --results-csv workspace/detectors/pano-yolo-merged-cpu/results.csv \
  --output-dir workspace/detectors/pano-yolo-merged-cpu
```

- Benchmark ONNX/PT weights:
```bash
PYTHONPATH=$(pwd) python scripts/benchmark_detector.py \
  --weights workspace/detectors/pano-yolo-merged-cpu/weights/best.onnx \
  --images workspace/validation/pano-yolo-merged-cpu-test6/val_batch1_pred.jpg
```

Run detection only (FDI numbering included):
```bash
PYTHONPATH=$(pwd) python -m src.pipeline.inference \
  --images "data/via/*.jpg" \
  --detector workspace/detectors/pano-yolo-merged-cpu/weights/best.onnx \
  --skip-classifier \
  --output-json workspace/validation/detections_only.json
```

Add `--classifier path/to/classifier.pt` (and drop the `--skip-classifier` flag) to enable status prediction and probability dumps.
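The layout of the output JSON is not documented in this README; assuming a list of per-image records with hypothetical `detections`, `fdi`, and `confidence` fields (check the actual output of `src/pipeline/inference.py`), a downstream consumer might filter low-confidence teeth like so:

```python
import json

def confident_teeth(records: list, threshold: float = 0.5) -> list:
    """Return FDI numbers of detections at or above a confidence threshold.
    The 'detections'/'fdi'/'confidence' field names are assumptions, not
    the pipeline's documented schema."""
    teeth = []
    for image in records:
        for det in image.get("detections", []):
            if det.get("confidence", 0.0) >= threshold:
                teeth.append(det["fdi"])
    return teeth

sample = json.loads("""[{"image": "pano.jpg", "detections":
  [{"fdi": 26, "confidence": 0.91}, {"fdi": 27, "confidence": 0.31}]}]""")
print(confident_teeth(sample))  # [26]
```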
- Best box epoch (71): box mAP50 0.959, mAP50-95 0.674; mask mAP50 0.947, mAP50-95 0.573 (see `workspace/detectors/pano-yolo-merged-cpu/results.csv`).
- Final epoch (80), masks: precision 0.943, recall 0.913, mAP50 0.938.
- Visual artifacts live in `workspace/detectors/pano-yolo-merged-cpu/*.png` and `workspace/validation/pano-yolo-merged-cpu-test6/*.jpg`.
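Numbers like the ones above can be re-extracted from `results.csv` with a few lines of stdlib Python. A sketch assuming Ultralytics-style column names such as `metrics/mAP50(B)` (the CSV text here is synthetic, not the repo's actual results):

```python
import csv
import io

def best_epoch(csv_text: str, metric: str = "metrics/mAP50(B)") -> tuple:
    """Return (epoch, value) for the row maximising the given metric.
    Some Ultralytics versions pad results.csv headers with spaces,
    hence the .strip() on keys and values."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        clean = {k.strip(): v.strip() for k, v in row.items()}
        rows.append((int(clean["epoch"]), float(clean[metric])))
    return max(rows, key=lambda r: r[1])

sample = "epoch,metrics/mAP50(B)\n70,0.951\n71,0.959\n80,0.958\n"
print(best_epoch(sample))  # (71, 0.959)
```

For the real file, read `workspace/detectors/pano-yolo-merged-cpu/results.csv` into `csv_text` first.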
Sample predictions: see `workspace/validation/pano-yolo-merged-cpu-test6/*.jpg`.
- Temporary caches and older runs were removed; regenerate splits via the prep scripts if needed.
- Keep `workspace/detectors/pano-yolo-merged-cpu/` and `workspace/validation/` to preserve the referenced history.
- `workspace/validation/detections_only.json` still references former `dataset/` paths; adjust them if replaying it.
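One way to patch those stale paths before replaying is to rewrite the prefix in every string value of the JSON. A sketch, where the old `dataset/` prefix and the `data/via/` replacement are assumptions about the actual layout:

```python
import json

def rewrite_prefix(node, old: str, new: str):
    """Recursively replace a leading path prefix in every string value
    of a JSON-like structure (dicts, lists, and scalars)."""
    if isinstance(node, str) and node.startswith(old):
        return new + node[len(old):]
    if isinstance(node, list):
        return [rewrite_prefix(item, old, new) for item in node]
    if isinstance(node, dict):
        return {key: rewrite_prefix(value, old, new) for key, value in node.items()}
    return node

doc = json.loads('{"image": "dataset/pano_001.jpg", "detections": []}')
print(rewrite_prefix(doc, "dataset/", "data/via/"))
# -> {'image': 'data/via/pano_001.jpg', 'detections': []}
```

Load `workspace/validation/detections_only.json`, run it through `rewrite_prefix`, and dump it back out with `json.dump`.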





