Releases: roboflow/rf-detr
## RF-DETR 1.6.4: custom `pretrain_weights`

### 🌱 Changed

- Class names on predictions. `predict()` now includes `class_name` in the returned `detections.data` dict, mapping each detection's 0-indexed class ID to its human-readable name. No more manual lookups. (#914)

  ```python
  model = RFDETRSmall(pretrain_weights="path/to/fine_tuned.pth")
  detections = model.predict("image.jpg", threshold=0.5)
  print(detections.data["class_name"])  # ["cat", "dog", "cat"]
  ```
### 🔧 Fixed

- Fixed segmentation training crashing on multi-GPU DDP setups. The segmentation head leaves some parameters unused on certain forward steps, which triggered `RuntimeError: parameters that were not used in producing the loss`. `build_trainer()` now automatically enables `find_unused_parameters=True` when `segmentation_head=True`. (#947)
- Fixed fused AdamW optimizer crash during FP32 multi-GPU training. On Ampere+ GPUs, fused AdamW was enabled whenever the hardware supported BF16 — even when the trainer was explicitly configured for `precision="32-true"`. This caused a dtype mismatch in DDP gradient buckets. The optimizer now checks the trainer's actual precision setting, not just GPU capability. (#947)
- Fixed multi-GPU DDP training failing in Jupyter notebooks and Kaggle. Fork-based DDP corrupted PyTorch's OpenMP thread pool, causing `SIGABRT` on the second process. RF-DETR now uses a spawn-based DDP strategy in interactive environments, avoiding the thread pool issue entirely. (#928)
- Fixed `RFDETR.train(resolution=...)` being silently ignored. The `resolution` kwarg is a model-level setting, not a training config field, so it was quietly dropped. It is now applied to the model config before training begins, with validation that the value is divisible by `patch_size * num_windows`. (#933)

  ```python
  model = RFDETRSmall()
  model.train(dataset_dir="./dataset", resolution=768)  # now works
  ```

- Fixed `save_dataset_grids` being silently a no-op. The grid saver was never wired into the training loop. Dataset sample grids are now saved to `{output_dir}/dataset_grids/` when enabled. Grid save failures are caught and logged without interrupting training. (#946)
- Fixed partial gradient-accumulation windows at the end of training epochs. When the dataset length was not evenly divisible by `effective_batch_size * world_size`, PyTorch Lightning would fire the optimizer on an incomplete accumulation window. The training dataset is now padded to an exact multiple, ensuring every optimizer step uses a full gradient window. (#937)
- Fixed `torch.export.export` failing on the transformer decoder. The `spatial_shapes_hw` parameter was not threaded through the decoder layers, breaking export for models using multi-scale deformable attention. (#936)
- Fixed `download_pretrain_weights()` silently overwriting fine-tuned checkpoints. When a fine-tuned checkpoint shared a filename with a registry model (e.g. `rf-detr-nano.pth`), an MD5 mismatch would trigger a re-download that replaced the user's weights. The function now returns early when the file exists and `redownload=False`, emitting a warning instead. (#935)
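The "return early" guard in that last fix can be sketched in a few lines. This is a minimal illustration only — `resolve_weights_path` is a hypothetical stand-in for the library's internal logic, not an actual RF-DETR function:

```python
import os
import warnings


def resolve_weights_path(path: str, redownload: bool = False) -> str:
    """Sketch of the guard: an existing file is never replaced unless
    the caller explicitly opts into a re-download."""
    if os.path.exists(path) and not redownload:
        warnings.warn(
            f"{path} already exists; skipping download. "
            "Pass redownload=True to replace it with registry weights."
        )
        return path
    # ... otherwise download from the registry and verify its MD5 checksum ...
    raise NotImplementedError("download step elided in this sketch")
```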
### 🏆 Contributors
Welcome to our new contributors, and thank you to everyone who helped with this release:
- M. Fazri Nizar (@mfazrinizar) (LinkedIn) — multi-GPU DDP training in notebooks
- Jiahao Sun (@sjhddh) (LinkedIn) — config type hint fix
- Jirka Borovec (@Borda) (LinkedIn) — release coordination, reviews
Automated contributions: @copilot-swe-agent[bot], @pre-commit-ci[bot]
Full changelog: 1.6.3...1.6.4
## RF-DETR 1.6.3: auto-detects `num_classes`

### 🌱 Changed

- `predict()` returns source image and shape on detections. Returned `sv.Detections` objects now include `detections.data["source_image"]` (the original image as a NumPy array) and `detections.data["source_shape"]` (a `(height, width)` tuple), so you can annotate results without loading the image separately. (#892)

  ```python
  detections = model.predict("https://media.roboflow.com/dog.jpg", threshold=0.5)
  annotated = sv.BoxAnnotator().annotate(detections.data["source_image"], detections)
  ```

- `RFDETR.train()` auto-detects `num_classes` from the dataset. When `num_classes` is not explicitly set, RF-DETR reads the class count from the dataset directory and reinitializes the detection head automatically. A warning is emitted when your configured value differs from the dataset count. (#893)

  ```python
  model = RFDETRSmall()
  model.train(dataset_dir="./dataset")  # num_classes inferred from dataset
  ```

- `optimize_for_inference()` accepts dtype as a string. Pass `"float16"` or `"bfloat16"` in addition to `torch.float16`; invalid inputs now raise `TypeError` uniformly. (#899)
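As a rough illustration of the uniform-`TypeError` behaviour, here is a minimal, hypothetical normalizer. The real `optimize_for_inference()` resolves these names to actual torch dtypes; `normalize_dtype` is not part of the library API:

```python
_ALLOWED = {"float16", "bfloat16"}


def normalize_dtype(dtype) -> str:
    """Accept 'float16'/'bfloat16' as strings or torch-dtype-like objects
    (whose str() is e.g. 'torch.float16'); anything else raises TypeError."""
    name = dtype if isinstance(dtype, str) else str(dtype)
    if name.startswith("torch."):
        name = name[len("torch."):]
    if name not in _ALLOWED:
        raise TypeError(
            f"unsupported dtype {dtype!r}; expected 'float16' or 'bfloat16'"
        )
    return name
```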
### 🔧 Fixed

- Fixed fine-tuned models exporting wrong class counts to ONNX: `reinitialize_detection_head` now replaces `nn.Linear` modules instead of mutating tensor data in-place, keeping `out_features` consistent with the actual weight shape after fine-tuning. (#904)
- Fixed `optimize_for_inference()` leaking a CUDA context on multi-GPU setups — deep-copy, export, and JIT-trace now run inside the correct device context. Also fixed: state is rolled back cleanly if optimization fails mid-way, and temp download files now use unique per-process paths to prevent parallel worker collisions. (#899)
- Fixed `deploy_to_roboflow` raising `FileNotFoundError` after the PyTorch Lightning migration — `class_names.txt` is now written to the upload directory and `args.class_names` is populated before saving the checkpoint, restoring uploads for all model types including segmentation. (#890)
### 🏆 Contributors
Welcome to our new contributors, and thank you to everyone who helped with this release:
- Md Faruk Alam (@farukalamai) (LinkedIn) — predict source image and shape
- Jirka Borovec (@Borda) (LinkedIn) — release coordination, reviews
Automated contributions: @copilot-swe-agent[bot], @pre-commit-ci[bot]
Full changelog: 1.6.2...1.6.3
## RF-DETR 1.6.2: Predict shapes

### 🚀 Added

- `RFDETR.predict(shape=...)` — pass an explicit `(height, width)` tuple to run inference at a non-square resolution, matching the resolution used when exporting the model. Both dimensions must be positive integers divisible by 14. (#866)

  ```python
  detections = model.predict("image.jpg", shape=(480, 640))
  ```
### 🌱 Changed

- `ModelConfig.device` and `RFDETR.train(device=...)` now accept `torch.device` objects and indexed device strings (`"cuda:0"`, `"cuda:1"`). Existing string values (`"cpu"`, `"cuda"`) are unchanged. `RFDETR.train()` warns when a valid but unmapped device type is passed to PyTorch Lightning auto-detection. (#872)

  ```python
  from rfdetr import RFDETRSmall
  from torch import device

  model = RFDETRSmall(...)
  model.train(..., device=device("cuda:1"))
  model.train(..., device="cuda:0")
  ```
### 🔧 Fixed

- Fixed ONNX export ignoring an explicit `patch_size` argument: `export()` and `predict()` now resolve `patch_size` from `model_config` by default, validate it strictly (must be a positive integer, not bool), and enforce that `(H, W)` dimensions are divisible by `patch_size × num_windows`. (#876)
- Fixed ONNX export for models traced with dynamic batch dimensions — `torch.full` is now used for Python-int spatial dims to avoid `H_.expand(N_)` tracer failures. (#871)
### 🏆 Contributors
Welcome to our new contributors, and thank you to everyone who helped with this release:
- zhaoshuo (@zhaoshuo1223) — ONNX export shape validation and patch_size fixes
- Sven Goluza (@svengoluza) — ONNX export dynamic batch fix
- Jirka Borovec (@Borda) (LinkedIn) — shape inference, torch.device support, release coordination
Full changelog: 1.6.1...1.6.2
## RF-DETR 1.6.1: Resolved checkpointing

### 🗑️ Deprecated

- ONNX export simplification removed. `RFDETR.export(..., simplify=..., force=...)` — both arguments are now no-ops and emit a `DeprecationWarning`. RF-DETR no longer runs ONNX simplification automatically; remove these arguments from your calls. They will be removed in v1.8. (#861)
### 🔧 Fixed

- Fixed `RFDETR.train()`: a missing `rfdetr[train]` install (e.g. plain `pip install rfdetr` in Colab) now raises an `ImportError` with an actionable message — `pip install "rfdetr[train,loggers]"` — instead of a raw `ModuleNotFoundError` with no install hint. (#858)
- Fixed the `AUG_AGGRESSIVE` preset: `translate_percent` was `(0.1, 0.1)` — a degenerate range that forced Albumentations `Affine` to always translate right/down by exactly 10%. Corrected to `(-0.1, 0.1)` for symmetric bidirectional translation. (#863)
- Fixed the PTL training path: `latest.ckpt` and per-interval checkpoints (`checkpoint_interval_N.ckpt`) are now properly written and restored on resume. (#847)
- Fixed `BestModelCallback` and the checkpoint monitor raising `MisconfigurationException` on non-eval epochs when `eval_interval > 1` — monitor key absence is now handled gracefully. (#848)
- Fixed the `protobuf` version constraint in the `loggers` extra to guard against the TensorBoard descriptor crash (`TypeError: Descriptors cannot be created directly`) with protobuf ≥ 4. (#846)
- Fixed duplicate `ModelCheckpoint` state keys when `checkpoint_interval=1`; `last.ckpt` is omitted in that configuration to avoid the collision. (#859)
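To see why the old `translate_percent` value was degenerate: Affine-style transforms sample the shift uniformly from the `(low, high)` range, so a zero-width range can only ever produce one value. A toy sketch of that sampling (not Albumentations' actual code):

```python
import random


def sample_translate(translate_percent):
    """Draw a translation fraction uniformly from (low, high), the way
    Affine-style transforms treat a range-valued translate_percent."""
    low, high = translate_percent
    return random.uniform(low, high)


# (0.1, 0.1)  -> always exactly +0.1: shift right/down by 10% every time
# (-0.1, 0.1) -> symmetric jitter in either direction
```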
### 🏆 Contributors
Thank you to everyone who helped with this release:
- Jirka Borovec (@Borda) (LinkedIn) – Patch fixes: PTL checkpoint restore, BestModelCallback crash, duplicate checkpoint keys, Colab ImportError, protobuf pin, AUG_AGGRESSIVE translation, ONNX simplify deprecation
Full changelog: 1.6.0...1.6.1
## RF-DETR 1.6.0: Composable Lightning Training

### 🚀 Added

- Composable PyTorch Lightning training building blocks. The training stack is now built on PyTorch Lightning and exposed as modular, swap-in pieces — like Lego. Use the familiar one-liner if that's all you need, or snap the blocks together yourself for full control: custom callbacks, multi-GPU strategies, YAML config files, and programmatic trainer construction. (#757, #794, closes #709)

  Level 1 — same API as always:

  ```python
  from rfdetr import RFDETRSmall

  model = RFDETRSmall()
  model.train(dataset_dir="path/to/dataset", epochs=50)
  ```

  Level 2 — assemble your own training from building blocks:

  ```python
  from rfdetr import RFDETRModelModule, RFDETRDataModule, build_trainer
  from rfdetr.training import RFDETREMACallback, COCOEvalCallback, BestModelCallback
  from pytorch_lightning import Trainer

  # Each block is a standard PTL component — swap, subclass, or extend any piece
  module = RFDETRModelModule(model_config=..., train_config=...)
  datamodule = RFDETRDataModule(dataset_dir="path/to/dataset", train_config=...)

  # build_trainer() wires up all RF-DETR callbacks for you ...
  trainer = build_trainer(train_config=...)

  # ... or compose your own from individual callbacks
  trainer = Trainer(
      max_epochs=50,
      callbacks=[
          RFDETREMACallback(decay=0.9998),  # exponential moving average
          COCOEvalCallback(),               # COCO mAP evaluation
          BestModelCallback(),              # save best checkpoint
          # ... add your own Lightning callbacks here
      ],
  )
  trainer.fit(module, datamodule)
  ```

  Level 3 — YAML config + CLI, zero Python required:

  ```yaml
  # configs/rfdetr-base.yaml
  model:
    class_path: rfdetr.RFDETRSmall
  trainer:
    max_epochs: 50
    precision: "16-mixed"
    devices: 4  # 4-GPU DDP, no code changes
  ```

  ```shell
  rfdetr fit --config configs/rfdetr-base.yaml
  ```
- Multi-GPU DDP via `model.train()`. Pass `strategy`, `devices`, and `num_nodes` directly to the familiar one-liner — no custom trainer required. Single-GPU behaviour is unchanged when these are omitted. (#808, closes #803)

  ```python
  model.train(
      dataset_dir="path/to/dataset",
      epochs=50,
      strategy="ddp",
      devices=4,
  )
  ```

- `batch_size='auto'` for automatic batch size discovery. RF-DETR runs a lightweight CUDA memory probe before training starts to find the largest safe micro-batch size, then recommends `grad_accum_steps` to hit a configurable effective batch size target (default 16). The resolved values are logged so you always know what was used. (#814)

  ```python
  model.train(
      dataset_dir="path/to/dataset",
      batch_size="auto",
      auto_batch_target_effective=16,  # optional, default 16
  )
  # Logs: "safe micro-batch = 3, grad_accum_steps = 4, effective_batch_size = 12"
  ```

- Segmentation support in the synthetic dataset generator. `generate_coco_dataset(with_segmentation=True)` produces COCO-format polygon annotations alongside bounding boxes, enabling end-to-end segmentation fine-tuning with fully synthetic data. (#781)
- `set_attn_implementation` on the DINOv2 backbone. Switch between `"eager"` and `"sdpa"` attention implementations at runtime without re-initialising the model. (#760)
- `ModelContext` is now a public API. `_ModelContext` has been promoted to `ModelContext` and exported from `rfdetr`. Use `model.context` to inspect `class_names`, `num_classes`, and related metadata after training or loading a checkpoint. (#835)

  ```python
  model = RFDETRSmall()
  model.train(dataset_dir="path/to/dataset", epochs=10)
  print(model.context.class_names)  # ['cat', 'dog', ...]
  print(model.context.num_classes)  # 2
  ```

- `backbone_lora` and `freeze_encoder` in `ModelConfig`. Both fine-tuning control flags are now first-class fields in `ModelConfig`, letting you configure them through the public API or YAML config. (#829)
- `eval_max_dets`, `eval_interval`, and `log_per_class_metrics` promoted to `TrainConfig` fields for explicit control over COCO evaluation behaviour.
- `python -m rfdetr` entry point. The CLI is now invokable as `python -m rfdetr`, in addition to the `rfdetr` console script.
- `py.typed` marker added — RF-DETR is now PEP 561-compliant; type checkers will discover inline type hints automatically.
### ⚠️ Breaking Changes

- `transformers>=5.1.0` now required. The DINOv2 windowed-attention backbone uses the transformers v5 API. Projects pinned to transformers v4 must either upgrade or pin `rfdetr<1.6.0`. (#760, closes #730)
- `draw_synthetic_shape` return type changed. The function now returns `Tuple[np.ndarray, List[float]]` — `(image, polygon)` — instead of just `np.ndarray`. Update any call site that unpacks only the image. (#781)

  ```python
  # Before
  img = draw_synthetic_shape(canvas, shape, color)
  # After
  img, polygon = draw_synthetic_shape(canvas, shape, color)
  ```

- Optional extras renamed. The PyPI install extras have been renamed for clarity:

  | Before | After |
  | --- | --- |
  | `rfdetr[metrics]` | `rfdetr[loggers]` |
  | `rfdetr[onnxexport]` | `rfdetr[onnx]` |
### 🗑️ Deprecated

- `rfdetr.deploy` — this internal module now redirects to `rfdetr.export` with a `DeprecationWarning`. The user-facing `model.export()` API is unchanged. If you import directly from `rfdetr.deploy.*`, migrate to `rfdetr.export.*` before v1.7.
- `rfdetr.util.*` — redirects to `rfdetr.utilities.*` with a `DeprecationWarning`. Migrate at your convenience before v1.7.
### 🌱 Changed

- Albumentations 1.x and 2.x both supported. The version constraint is now `albumentations>=1.4.24,<3.0.0`. Configs using the old `height`/`width` keyword arguments are automatically adapted to the 2.x `size=(height, width)` API. (#786, closes #779)
- Current learning rate shown in the training progress bar. The live progress bar now displays the active learning rate alongside loss so you can see scheduler changes in real time. (#809, closes #804)
- Faster `import rfdetr` startup. `supervision`, `pytorch_lightning`, and several other heavy dependencies are no longer imported at module load time — they are loaded on first use instead. Cold-import time drops measurably in inference-only environments. (#801)
### 🔧 Fixed

- Fixed checkpoint loading into a model with a different architecture (segmentation vs. detection, or a `patch_size` mismatch) — RF-DETR now raises a descriptive `ValueError` with actionable guidance before `load_state_dict` ever fires, replacing a cryptic tensor-size `RuntimeError`. (#810, closes #806)
- Fixed `class_names` not reflecting dataset labels on `model.predict()` after training — class names are now synced from the dataset at the end of training so inference always uses the correct label list. (#816)
- Fixed detection head reinitialization incorrectly overwriting fine-tuned weights when loading a checkpoint with fewer classes than the model default. The second `reinitialize_detection_head` call now only fires in the backbone-pretrain scenario. (#815, closes #813, #509)
- Fixed `grid_sample` and bicubic interpolation silently falling back to CPU on Apple Silicon (MPS) — both operations now run natively on MPS via a custom implementation, restoring full GPU utilisation on Mac. (#821)
- Fixed `early_stopping=False` in `TrainConfig` being silently ignored — the setting now propagates correctly and training runs to completion when disabled. (#835)
- Fixed a `ValueError: matrix entries are not finite` crash in `HungarianMatcher` when the cost matrix contains `NaN` or `Inf` values — non-finite entries are now replaced with a large finite sentinel before Hungarian assignment, and a warning is emitted at most once per matcher instance. (#787, closes #784)
- Fixed YOLO dataset validation rejecting `data.yml` — both `.yaml` and `.yml` extensions are now accepted. (#777, closes #775)
- Fixed degenerate bounding boxes (zero width or height) causing a `ValueError` in Albumentations validation — they are now silently dropped before the transform pipeline runs. (#825)
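The degenerate-box filter in that last fix amounts to a one-line predicate. The sketch below assumes `(x_min, y_min, x_max, y_max)` boxes and is illustrative only, not the library's exact code:

```python
def drop_degenerate_boxes(boxes):
    """Keep only boxes with strictly positive width and height, mirroring
    the 'silently dropped before the transform pipeline' behaviour."""
    return [
        (x1, y1, x2, y2)
        for x1, y1, x2, y2 in boxes
        if (x2 - x1) > 0 and (y2 - y1) > 0
    ]
```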
### 🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

- Haocheng Lu (@HaochengLu) – Automatic batch size discovery (`batch_size='auto'`)
- Omkar Kabde (@omkar-334) (LinkedIn) – Transformers v5 migration for the DINOv2 backbone
- Jirka Borovec (@Borda) (LinkedIn) – PyTorch Lightning migration, DDP support, MPS fixes, Albumentations 2.x support, HungarianMatcher fix, deferred imports, package restructure
Full Changelog: 1.5.2...v1.6.0
## RF-DETR 1.5.2: show GPU memory

### 🚀 Added

- Peak GPU memory in progress bars. Training and evaluation tqdm bars now display `max_mem` (in MB) when running on CUDA, making it easy to track hardware utilisation without a separate profiling tool. The metric is device-aware and is omitted on CPU and MPS runs. (#773)
### 🔧 Fixed

- Fixed `aug_config` being silently ignored when training on YOLO-format datasets — `build_roboflow_from_yolo` never forwarded the value, so transforms always fell back to the default `AUG_CONFIG` regardless of what was configured. (#774)
- Fixed segmentation evaluation metrics not being written to `results_mask.json` during the validation phase. The file now has the same structure as `results.json` and is updated after both validation and test runs. (#772)
- Fixed an `AttributeError` crash in `update_drop_path` when the DinoV2 backbone layer structure does not match any known pattern. `_get_backbone_encoder_layers` now returns `None` for unrecognised architectures and `update_drop_path` exits early instead of raising. (#762)
- Fixed `drop_path_rate` not being forwarded to the DinoV2 model configuration, meaning stochastic depth was never actually applied even when explicitly set. A warning is now emitted when `drop_path_rate > 0.0` is used with a non-windowed backbone where it has no effect. (#762)
- Fixed incorrect COCO hierarchy filtering logic that caused parent categories to be excluded from the class list when they should have been retained. (#759)
- Fixed evaluation metric corruption on 1-indexed Roboflow datasets caused by a flawed contiguity check in `_should_use_raw_category_ids` — the old heuristic inspected per-batch labels and could pick the wrong resolution path depending on which labels happened to appear first. (#755)
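The flaw behind that last fix can be illustrated with a toy contiguity check: a robust version inspects the full label set once, instead of whichever labels happen to land in the first batch. `ids_are_contiguous_from_zero` below is a hypothetical helper, not `_should_use_raw_category_ids` itself:

```python
def ids_are_contiguous_from_zero(category_ids) -> bool:
    """Dataset-level check: True iff the distinct IDs are exactly 0..N-1.
    A 1-indexed dataset (e.g. [1, 2, 3]) fails and therefore needs remapping."""
    ids = sorted(set(category_ids))
    return ids == list(range(len(ids)))
```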
### 🏆 Contributors
A special welcome to our new contributors and a big thank you to everyone who helped with this release:
- Samuel Lima (@samuellimabraz) – Fix drop path in DinoV2 backbone
- youthfrost (@youthfrost) – Fix segmentation results_mask.json saving
- Jelle R. Dalenberg (@jrdalenberg) (LinkedIn) – Fix COCO hierarchy filtering
- Abdul Mukit (@Abdul-Mukit) (LinkedIn) – Fix category contiguity and evaluation metric corruption on 1-indexed datasets
- Jirka Borovec (@Borda) (LinkedIn) – Fix aug_config in YOLO dataset builder, max_mem telemetry, CI/testing infrastructure
Full Changelog: 1.5.1...1.5.2
## RF-DETR 1.5.1: Nested transforms

### 🚀 Added

- Nested Albumentations transforms. `OneOf` and `Sequential` containers now work correctly inside the augmentation pipeline. Probability settings on container transforms are ignored — they always fire, keeping composition predictable. Inference pipelines can also pass `None` targets so the same transform object works for both training and inference. (#752)

  ```python
  from rfdetr import RFDETRSmall

  model = RFDETRSmall()
  model.train(
      dataset_dir="...",
      aug_config={
          "OneOf": [
              {"RandomBrightnessContrast": {"p": 0.5}},
              {"HueSaturationValue": {"p": 0.5}},
          ],
          "HorizontalFlip": {"p": 0.5},
      },
  )
  ```
### 🌱 Changed

- The dataset transform pipeline now uses torchvision-native `Compose`, `ToImage`, and `ToDtype` instead of custom implementations. `Normalize` defaults to ImageNet mean/std. (#745)
### 🔧 Fixed

- Fixed `RFDETRMedium` missing from the public API — `__all__` contained a duplicate `RFDETRSmall` entry instead. (#748)
- Fixed `AR50_90` reporting an incorrect value in `MetricsMLFlowSink` due to a wrong COCO evaluation index. (#735)
- Fixed supercategory filtering in `_load_classes` for COCO datasets with flat or mixed supercategory structures. (#744)
- Fixed a crash in geometric transforms (flip, crop, etc.) when a sample contains zero-area / empty masks. (#727)
- Fixed segmentation training on Colab — `DepthwiseConvBlock` now disables cuDNN for depthwise separable convolutions. (#728)
- Locked `onnxsim` to `<0.6.0` to prevent `pip install` from hanging indefinitely. (#749)
### 🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

- tillfri (@tillfri) – Fix AR50_90 metric index in MLflow sink
- justin-alt-account (@justin-alt-account) – Fix `RFDETRMedium` missing from `__all__`
- Jirka Borovec (@Borda) (LinkedIn) – Nested Albumentations support, transform pipeline refactor, mask fix, supercategory fix, onnxsim pin, CI/testing infrastructure
Full Changelog: 1.5.0...1.5.1
## RF-DETR 1.5.0: Custom augmentations

### 🚀 Added

- Custom augmentations via Albumentations. You can now control training augmentations through the `aug_config` parameter in `train()`. Pass a dictionary of Albumentations transforms, choose a built-in named preset, or disable augmentations entirely. Bounding boxes and segmentation masks are automatically transformed alongside images. (#263, #702)

  ```python
  from rfdetr import RFDETRSmall
  from rfdetr.datasets.aug_config import AUG_CONSERVATIVE, AUG_AGGRESSIVE, AUG_AERIAL, AUG_INDUSTRIAL

  model = RFDETRSmall()

  # Use a built-in preset
  model.train(dataset_dir="...", aug_config=AUG_AGGRESSIVE, progress_bar=True)

  # Or define transforms explicitly
  model.train(
      dataset_dir="...",
      aug_config={
          "HorizontalFlip": {"p": 0.5},
          "RandomBrightnessContrast": {"brightness_limit": 0.2, "p": 0.4},
          "GaussianBlur": {"blur_limit": 3, "p": 0.2},
      },
      progress_bar=True,
  )

  # Disable all augmentations
  model.train(dataset_dir="...", aug_config={})
  ```

  | Preset | Best for |
  | --- | --- |
  | `AUG_CONSERVATIVE` | Small datasets (under 500 images) |
  | `AUG_AGGRESSIVE` | Large datasets (2000+ images) |
  | `AUG_AERIAL` | Satellite / overhead imagery |
  | `AUG_INDUSTRIAL` | Manufacturing / inspection data |

- Save augmented training image samples. Enable `save_dataset_grids=True` in `TrainConfig` to write 3×3 JPEG grids of augmented training and validation images to your output directory before training begins, making it easy to verify your augmentation pipeline without running a full epoch. (#153)

  ```python
  from rfdetr import RFDETRSmall

  model = RFDETRSmall()
  model.train(dataset_dir="...", save_dataset_grids=True, output_dir="output/")
  # Grids are saved to output/:
  #   train_batch0_grid.jpg, train_batch1_grid.jpg, train_batch2_grid.jpg
  #   val_batch0_grid.jpg, val_batch1_grid.jpg, val_batch2_grid.jpg
  ```

- ClearML training logger. Set `clearml=True` in `TrainConfig` to stream per-epoch metrics directly to your ClearML project. (#520)

  ```python
  from rfdetr import RFDETRSmall

  model = RFDETRSmall()
  model.train(dataset_dir="...", clearml=True)
  ```

- MLflow training logger. Set `mlflow=True` in `TrainConfig` to log runs and metrics to MLflow, with support for custom tracking URIs and system metrics. (#109)

  ```python
  from rfdetr import RFDETRSmall

  model = RFDETRSmall()
  model.train(dataset_dir="...", mlflow=True)
  ```

- Progress bar for training and validation. A live progress bar now shows batch-level progress during training and validation, and on-screen logs are structured for easier reading. (#204)
- `device` field added to `TrainConfig`, allowing explicit device selection when configuring training programmatically. (#687)
- `ModelConfig` now raises an error on unknown parameters, preventing silent misconfiguration from typos or stale config keys. (#196)
- TensorRT export guide. New documentation section covering how to convert an exported ONNX model to a TensorRT engine for maximum inference throughput. (#175)
### 🌱 Changed

- `OPEN_SOURCE_MODELS` constant deprecated in favour of the `ModelWeights` enum for cleaner model weight references. (#696)
- Added MD5 checksum validation for pretrained weight downloads, preventing silent use of corrupted files. (#679)
### 🔧 Fixed

- Fixed an Albumentations bool-mask crash that occurred during segmentation training. (#706)
- Fixed an `UnboundLocalError` when resuming training from a completed checkpoint. (#707)
- Prevented corruption of `checkpoint_best_total.pth` via atomic checkpoint stripping. (#708)
- Fixed a PyTorch 2.9+ compatibility issue with CUDA capability detection. (#686)
- Fixed a dtype mismatch error when `use_position_supervised_loss=True`. (#447)
- Fixed inconsistent return values from `build_model`. (#519)
- Fixed the `positional_encoding_size` type annotation from `bool` to `int`. (#524)
- Fixed ONNX export `output_names` to include masks when exporting segmentation models. (#402)
- Fixed `num_select` not being correctly updated during segmentation model fine-tuning. (#399)
- Fixed `np.argwhere` → `np.argmax` misuse. (#536)
- Fixed COCO sparse category ID remapping logic so that non-contiguous or offset category IDs are correctly handled. (#712)
- Fixed segmentation mask filtering when using aggressive augmentations. (#717)
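The intended behaviour of the sparse category ID remapping fix can be sketched as a sorted enumeration. `remap_category_ids` is a hypothetical helper shown only to illustrate the idea, not the library's actual implementation:

```python
def remap_category_ids(category_ids):
    """Map arbitrary COCO category IDs to a contiguous 0..N-1 range,
    e.g. IDs {1, 3, 7} become {1: 0, 3: 1, 7: 2}."""
    return {cid: i for i, cid in enumerate(sorted(set(category_ids)))}
```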
### 🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

- Panagiotis Moraitis (@panagiotamoraiti) (LinkedIn) – Custom Albumentations augmentation wrapper
- Shubham Rajvanshi (@shubsraj) (LinkedIn) – Progress bar and structured training logs
- Clement (@CorporalCleg) – ClearML logger integration
- Lakshman (@lab176344) – MLflow logger integration
- Mattia Di Giusto (@picjul) (LinkedIn) – Save augmented training image samples
- Juan Cobos (@juan-cobos) – `device` field in `TrainConfig`
- Ahmed Samir (@Ahmed-Samir11) – Error on unknown `ModelConfig` parameters
- Dominik Baran (@Yozer) (LinkedIn) – Fix segmentation mask filtering with aggressive augmentations
- Sungchul Kim (@sungchul2) (LinkedIn) – Fix `num_select` during segmentation fine-tuning
- Abdul Mukit (@Abdul-Mukit) (LinkedIn) – Fix ONNX export output names for segmentation
- Alarmod (@Alarmod) – PyTorch 2.9+ compatibility fix
- lixiaolei1982 (@lixiaolei1982) – Fix `build_model` return values & `positional_encoding_size` type
- kawabe-jiw (@kawabe-jiw) – Fix dtype mismatch with `use_position_supervised_loss=True`
- Andrei Moraru (@AndreiMoraru123) (LinkedIn) – `np.argwhere` → `np.argmax` fix
- Niels Teunissen (@DatSplit) – TensorRT export documentation
- stop1one (@stop1one) (LinkedIn) – Stabilize distributed training & test reliability
- Jirka Borovec (@Borda) (LinkedIn) – Augmentation presets, MD5 weight validation, `ModelWeights` enum, CI/testing infrastructure, docs
## RF-DETR 1.4.3

### 🐞 Fixed

- Export: Fix `deploy_to_roboflow` segmentation model export (#578)
### 🛠️ Changed / Maintenance

- Validation: Add MD5 validation for file downloads and pretrained weights handling (#679)
- Testing & Benchmarking:
- Docs: Update license section in README.md
### 🏆 Contributors
A special welcome to our new contributors and a big thank you to everyone who helped with this release:
- Francesco Bodria (@francescobodria) (LinkedIn) – COCO export fixes
- Matvei Popov (@Matvezy) (LinkedIn) – Segmentation export fixes
- Jirka Borovec (@Borda) (LinkedIn) – MD5 validation, benchmarks, and maintenance
Full Changelog: 1.4.2...1.4.3
## RF-DETR 1.4.2

### 🚀 Added

- YOLO Support: Update YOLO dataset format support (#74)
- Inference: Support for image URLs in prediction (#629)
- Training:

### 🐞 Fixed

- CLI: Fix error in CLI script (#246)
- Export: Fix RFDETR-Seg ONNX export failing (#626)
- Training/Validation:
- Dependencies: Hotfix: pin `transformers` dependency to the version range `>4.0.0,<5.0.0` (#599)
### 🛠️ Changed / Maintenance

- Synthetic Data: Add synthetic dataset generation module and corresponding tests (#617)
- Benchmarking:
- Developer Experience:
  - Replace `print` statements with logger calls and remove unused imports (#158)
- Refactoring:
  - Separate platform sub-module and migrate to `rfdetr_plus` package (#645)
- Testing & CI:
  - Stabilize training tests and centralize seed logic with `seed_all` (#655)
- Docs Formatting: Standardize documentation formatting and enable `codespell`/`mdformat` hooks (#634, #635, #637)
### 🏆 Contributors
A special welcome to our new contributors and a big thank you to everyone who helped with this release:
- Jirka Borovec (@Borda) (LinkedIn) – Synthetic data generation & COCO benchmarking
- Piotr Skalski (@SkalskiP) (LinkedIn) – Logger refactoring & cleanup
- Omkar Kabde (@omkar-334) (LinkedIn) – Type annotations & custom print arguments
- Damiano Ferrari (@ferraridamiano) (LinkedIn) – Python version management
- Panagiotis Moraitis (@panagiotamoraiti) (LinkedIn) – Fixing misleading warning messages
- Mario De Genaro (@mario-dg) (LinkedIn) – YOLO dataset format updates
- Tahar H. (@taharh) (LinkedIn) – RFDETR-Seg ONNX Export fixes
- Hardik Dava (@hardikdava) – Support for image URLs in predictions
- Alex Holliday (@AHolliday) – Custom datasets without test splits
- Surya (@surya3214) – CLI script bug fixes
- Y. Yang (@y-yang42) – Windowed attention mechanism fix
Full Changelog: 1.4.1...1.4.2