
Releases: roboflow/rf-detr

RF-DETR 1.6.4: custom pretrain_weights

10 Apr 11:26


🌱 Changed

  • Class names on predictions. predict() now includes class_name in the returned detections.data dict, mapping each detection's 0-indexed class ID to its human-readable name. No more manual lookups. (#914)

    model = RFDETRSmall(pretrain_weights="path/to/fine_tuned.pth")
    detections = model.predict("image.jpg", threshold=0.5)
    print(detections.data["class_name"])  # ["cat", "dog", "cat"]

🔧 Fixed

  • Fixed segmentation training crashing on multi-GPU DDP setups. The segmentation head leaves some parameters unused on certain forward steps, which triggered RuntimeError: parameters that were not used in producing the loss. build_trainer() now automatically enables find_unused_parameters=True when segmentation_head=True. (#947)
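A minimal sketch of the decision `build_trainer()` now makes (the function name and shape here are illustrative, not the actual rf-detr internals):

```python
def ddp_kwargs(segmentation_head: bool) -> dict:
    """Derive DDP strategy kwargs from the model configuration.

    The segmentation head leaves some parameters unused on certain
    forward steps, so DDP must be told to tolerate unused parameters.
    """
    return {"find_unused_parameters": bool(segmentation_head)}
```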

  • Fixed fused AdamW optimizer crash during FP32 multi-GPU training. On Ampere+ GPUs, fused AdamW was enabled whenever the hardware supported BF16 — even when the trainer was explicitly configured for precision="32-true". This caused a dtype mismatch in DDP gradient buckets. The optimizer now checks the trainer's actual precision setting, not just GPU capability. (#947)
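The corrected gating logic can be sketched as a pure function (the precision strings follow PyTorch Lightning conventions; the exact set rf-detr checks is an assumption):

```python
def should_use_fused_adamw(bf16_supported: bool, precision: str) -> bool:
    """Gate fused AdamW on the trainer's configured precision, not just
    GPU capability: full-precision ("32-true") runs must stay unfused."""
    reduced_precision = precision in {"16-mixed", "bf16-mixed", "bf16-true"}
    return bf16_supported and reduced_precision
```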

  • Fixed multi-GPU DDP training failing in Jupyter notebooks and Kaggle. Fork-based DDP corrupted PyTorch's OpenMP thread pool, causing SIGABRT on the second process. RF-DETR now uses a spawn-based DDP strategy in interactive environments, avoiding the thread pool issue entirely. (#928)

  • Fixed RFDETR.train(resolution=...) being silently ignored. The resolution kwarg is a model-level setting, not a training config field, so it was quietly dropped. It is now applied to the model config before training begins, with validation that the value is divisible by patch_size * num_windows. (#933)

    model = RFDETRSmall()
    model.train(dataset_dir="./dataset", resolution=768)  # now works
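The divisibility check described above can be sketched as follows (the parameter values in the test are illustrative, not the actual model defaults):

```python
def validate_resolution(resolution: int, patch_size: int, num_windows: int) -> None:
    """Reject resolutions that cannot tile evenly into windows of patches."""
    multiple = patch_size * num_windows
    if resolution % multiple != 0:
        raise ValueError(
            f"resolution={resolution} must be divisible by "
            f"patch_size * num_windows = {multiple}"
        )
```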
  • Fixed save_dataset_grids being silently a no-op. The grid saver was never wired into the training loop. Dataset sample grids are now saved to {output_dir}/dataset_grids/ when enabled. Grid save failures are caught and logged without interrupting training. (#946)

  • Fixed partial gradient-accumulation windows at the end of training epochs. When the dataset length was not evenly divisible by effective_batch_size * world_size, PyTorch Lightning would fire the optimizer on an incomplete accumulation window. The training dataset is now padded to an exact multiple, ensuring every optimizer step uses a full gradient window. (#937)
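The padding target is simply the next exact multiple of the accumulation window; a sketch:

```python
def padded_dataset_length(n_samples: int, effective_batch_size: int, world_size: int) -> int:
    """Round the dataset length up to the next multiple of the full
    gradient-accumulation window, so every optimizer step sees a
    complete window."""
    window = effective_batch_size * world_size
    return -(-n_samples // window) * window  # ceil-divide, then rescale
```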

  • Fixed torch.export.export failing on the transformer decoder. The spatial_shapes_hw parameter was not threaded through the decoder layers, breaking export for models using multi-scale deformable attention. (#936)

  • Fixed download_pretrain_weights() silently overwriting fine-tuned checkpoints. When a fine-tuned checkpoint shared a filename with a registry model (e.g. rf-detr-nano.pth), an MD5 mismatch would trigger a re-download that replaced the user's weights. The function now returns early when the file exists and redownload=False, emitting a warning instead. (#935)
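The new behaviour boils down to a small guard (a sketch; the actual function also emits the warning described above):

```python
def needs_download(file_exists: bool, redownload: bool) -> bool:
    """Existing checkpoints are never overwritten unless the caller
    explicitly opts in; a checksum mismatch warns instead of
    triggering a re-download."""
    return redownload or not file_exists
```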


🏆 Contributors

Welcome to our new contributors, and thank you to everyone who helped with this release:

Automated contributions: @copilot-swe-agent[bot], @pre-commit-ci[bot]


Full changelog: 1.6.3...1.6.4

RF-DETR 1.6.3: auto-detects num_classes

02 Apr 14:04


🌱 Changed

  • predict() returns source image and shape on detections. Returned sv.Detections objects now include detections.data["source_image"] (the original image as a NumPy array) and detections.data["source_shape"] (a (height, width) tuple), so you can annotate results without loading the image separately. (#892)

    detections = model.predict("https://media.roboflow.com/dog.jpg", threshold=0.5)
    annotated = sv.BoxAnnotator().annotate(detections.data["source_image"], detections)
  • RFDETR.train() auto-detects num_classes from the dataset. When num_classes is not explicitly set, RF-DETR reads the class count from the dataset directory and reinitializes the detection head automatically. A warning is emitted when your configured value differs from the dataset count. (#893)

    model = RFDETRSmall()
    model.train(dataset_dir="./dataset")  # num_classes inferred from dataset
  • optimize_for_inference() accepts dtype as a string. Pass "float16" or "bfloat16" in addition to torch.float16; invalid inputs now raise TypeError uniformly. (#899)
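A sketch of the string handling, assuming the normalization works roughly like this (the helper name and fallback via `str()` are illustrative):

```python
def normalize_dtype(dtype) -> str:
    """Accept "float16"/"bfloat16" strings or the corresponding torch
    dtype objects (whose str() is e.g. "torch.float16"); raise
    TypeError uniformly on anything else."""
    allowed = {"float16", "bfloat16"}
    name = dtype if isinstance(dtype, str) else str(dtype).rsplit(".", 1)[-1]
    if name not in allowed:
        raise TypeError(f"dtype must be one of {sorted(allowed)}, got {dtype!r}")
    return name
```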

🔧 Fixed

  • Fixed fine-tuned models exporting wrong class counts to ONNX: reinitialize_detection_head now replaces nn.Linear modules instead of mutating tensor data in-place, keeping out_features consistent with the actual weight shape after fine-tuning. (#904)
  • Fixed optimize_for_inference() leaking a CUDA context on multi-GPU setups — deep-copy, export, and JIT-trace now run inside the correct device context. Also fixed: state is rolled back cleanly if optimization fails mid-way, and temp download files now use unique per-process paths to prevent parallel worker collisions. (#899)
  • Fixed deploy_to_roboflow raising FileNotFoundError after PyTorch Lightning migration — class_names.txt is now written to the upload directory and args.class_names is populated before saving the checkpoint, restoring uploads for all model types including segmentation. (#890)

🏆 Contributors

Welcome to our new contributors, and thank you to everyone who helped with this release:

Automated contributions: @copilot-swe-agent[bot], @pre-commit-ci[bot]


Full changelog: 1.6.2...1.6.3

RF-DETR 1.6.2: Predict shapes

27 Mar 16:33


🚀 Added

  • RFDETR.predict(shape=...) — pass an explicit (height, width) tuple to run inference at a non-square resolution, matching the resolution used when exporting the model. Both dimensions must be positive integers divisible by 14. (#866)

     detections = model.predict("image.jpg", shape=(480, 640))

🌱 Changed

  • ModelConfig.device and RFDETR.train(device=...) now accept torch.device objects and indexed device strings ("cuda:0", "cuda:1"). Existing string values ("cpu", "cuda") are unchanged. RFDETR.train() warns when a valid but unmapped device type is passed to PyTorch Lightning auto-detection. (#872)

     from rfdetr import RFDETRSmall
     from torch import device
     
     model = RFDETRSmall(...)
     
     model.train(..., device=device("cuda:1"))
     model.train(..., device="cuda:0")

🔧 Fixed

  • Fixed ONNX export ignoring an explicit patch_size argument: export() and predict() now resolve patch_size from model_config by default, validate it strictly (must be a positive integer, not bool), and enforce that (H, W) dimensions are divisible by patch_size × num_windows. (#876)
  • Fixed ONNX export for models traced with dynamic batch dimensions — torch.full is now used for Python-int spatial dims to avoid H_.expand(N_) tracer failures. (#871)

🏆 Contributors

Welcome to our new contributors, and thank you to everyone who helped with this release:

  • zhaoshuo (@zhaoshuo1223) — ONNX export shape validation and patch_size fixes
  • Sven Goluza (@svengoluza) — ONNX export dynamic batch fix
  • Jirka Borovec (@Borda) (LinkedIn) — shape inference, torch.device support, release coordination

Full changelog: 1.6.1...1.6.2

RF-DETR 1.6.1: Resolved checkpointing

25 Mar 13:51


🗑️ Deprecated

  • ONNX export simplification removed. RFDETR.export(..., simplify=..., force=...) — both arguments are now no-ops and emit a DeprecationWarning. RF-DETR no longer runs ONNX simplification automatically; remove these arguments from your calls. They will be removed in v1.8. (#861)

🔧 Fixed

  • Fixed RFDETR.train(): a missing rfdetr[train] install (e.g. plain pip install rfdetr in Colab) now raises an ImportError with an actionable message — pip install "rfdetr[train,loggers]" — instead of a raw ModuleNotFoundError with no install hint. (#858)
  • Fixed AUG_AGGRESSIVE preset: translate_percent was (0.1, 0.1) — a degenerate range that forced Albumentations Affine to always translate right/down by exactly 10%. Corrected to (-0.1, 0.1) for symmetric bidirectional translation. (#863)
  • Fixed PTL training path: latest.ckpt and per-interval checkpoints (checkpoint_interval_N.ckpt) are now properly written and restored on resume. (#847)
  • Fixed BestModelCallback and checkpoint monitor raising MisconfigurationException on non-eval epochs when eval_interval > 1 — monitor key absence is now handled gracefully. (#848)
  • Fixed protobuf version constraint in the loggers extra to guard against TensorBoard descriptor crash (TypeError: Descriptors cannot be created directly) with protobuf ≥ 4. (#846)
  • Fixed duplicate ModelCheckpoint state keys when checkpoint_interval=1; last.ckpt is omitted in that configuration to avoid collision. (#859)
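The AUG_AGGRESSIVE fix above amounts to making the translation range symmetric. A corrected preset fragment, for illustration (the probability value here is hypothetical):

```python
AFFINE_FRAGMENT = {
    "Affine": {
        # Symmetric range: shift left/up as well as right/down,
        # instead of the degenerate (0.1, 0.1) that always shifted
        # right/down by exactly 10%.
        "translate_percent": (-0.1, 0.1),
        "p": 0.5,
    }
}
```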

🏆 Contributors

Thank you to everyone who helped with this release:

  • Jirka Borovec (@Borda) (LinkedIn) – Patch fixes: PTL checkpoint restore, BestModelCallback crash, duplicate checkpoint keys, Colab ImportError, protobuf pin, AUG_AGGRESSIVE translation, ONNX simplify deprecation

Full changelog: 1.6.0...1.6.1

RF-DETR 1.6.0: Composable Lightning Training

20 Mar 15:54


🚀 Added

  • Composable PyTorch Lightning training building blocks. The training stack is now built on PyTorch Lightning and exposed as modular, swap-in pieces — like Lego. Use the familiar one-liner if that's all you need, or snap the blocks together yourself for full control: custom callbacks, multi-GPU strategies, YAML config files, and programmatic trainer construction. (#757, #794, closes #709)

    Level 1 — same API as always:

     from rfdetr import RFDETRSmall
     
     model = RFDETRSmall()
     model.train(dataset_dir="path/to/dataset", epochs=50)

    Level 2 — assemble your own training from building blocks:

     from rfdetr import RFDETRModelModule, RFDETRDataModule, build_trainer
     from rfdetr.training import RFDETREMACallback, COCOEvalCallback, BestModelCallback
     from pytorch_lightning import Trainer
     
     # Each block is a standard PTL component — swap, subclass, or extend any piece
     module = RFDETRModelModule(model_config=..., train_config=...)
     datamodule = RFDETRDataModule(dataset_dir="path/to/dataset", train_config=...)
     
     # build_trainer() wires up all RF-DETR callbacks for you ...
     trainer = build_trainer(train_config=...)
     
     # ... or compose your own from individual callbacks
     trainer = Trainer(
         max_epochs=50,
         callbacks=[
             RFDETREMACallback(decay=0.9998),   # exponential moving average
             COCOEvalCallback(),                # COCO mAP evaluation
             BestModelCallback(),               # save best checkpoint
             # ... add your own Lightning callbacks here
         ],
     )
     
     trainer.fit(module, datamodule)

    Level 3 — YAML config + CLI, zero Python required:

     # configs/rfdetr-base.yaml
     model:
       class_path: rfdetr.RFDETRSmall
     trainer:
       max_epochs: 50
       precision: "16-mixed"
       devices: 4  # 4-GPU DDP, no code changes
    Then launch training from the shell:

      rfdetr fit --config configs/rfdetr-base.yaml
  • Multi-GPU DDP via model.train(). Pass strategy, devices, and num_nodes directly to the familiar one-liner — no custom trainer required. Single-GPU behaviour is unchanged when these are omitted. (#808, closes #803)

     model.train(
         dataset_dir="path/to/dataset",
         epochs=50,
         strategy="ddp",
         devices=4,
     )
  • batch_size='auto' for automatic batch size discovery. RF-DETR runs a lightweight CUDA memory probe before training starts to find the largest safe micro-batch size, then recommends grad_accum_steps to hit a configurable effective batch size target (default 16). The resolved values are logged so you always know what was used. (#814)

     model.train(
         dataset_dir="path/to/dataset",
         batch_size="auto",
         auto_batch_target_effective=16,  # optional, default 16
     )
     # Logs: "safe micro-batch = 3, grad_accum_steps = 4, effective_batch_size = 12"
  • Segmentation support in the synthetic dataset generator. generate_coco_dataset(with_segmentation=True) produces COCO-format polygon annotations alongside bounding boxes, enabling end-to-end segmentation fine-tuning with fully synthetic data. (#781)
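For reference, a COCO-format polygon annotation of the kind the generator now emits looks like this (values illustrative):

```python
annotation = {
    "id": 1,
    "image_id": 1,
    "category_id": 1,
    "bbox": [10.0, 20.0, 30.0, 40.0],  # COCO [x, y, width, height]
    # One flat [x1, y1, x2, y2, ...] polygon per part of the instance
    "segmentation": [[10.0, 20.0, 40.0, 20.0, 40.0, 60.0, 10.0, 60.0]],
    "area": 1200.0,
    "iscrowd": 0,
}
```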

  • set_attn_implementation on DINOv2 backbone. Switch between "eager" and "sdpa" attention implementations at runtime without re-initialising the model. (#760)

  • ModelContext is now a public API. _ModelContext has been promoted to ModelContext and exported from rfdetr. Use model.context to inspect class_names, num_classes, and related metadata after training or loading a checkpoint. (#835)

     model = RFDETRSmall()
     model.train(dataset_dir="path/to/dataset", epochs=10)
     
     print(model.context.class_names)   # ['cat', 'dog', ...]
     print(model.context.num_classes)   # 2
  • backbone_lora and freeze_encoder in ModelConfig. Both fine-tuning control flags are now first-class fields in ModelConfig, letting you configure them through the public API or YAML config. (#829)

  • eval_max_dets, eval_interval, and log_per_class_metrics promoted to TrainConfig fields for explicit control over COCO evaluation behaviour.

  • python -m rfdetr entry point. The CLI is now invokable as python -m rfdetr, in addition to the rfdetr console script.

  • py.typed marker added — RF-DETR is now PEP 561–compliant; type checkers will discover inline type hints automatically.

⚠️ Breaking Changes

  • transformers >=5.1.0 now required. The DINOv2 windowed-attention backbone uses the transformers v5 API. Projects pinned to transformers v4 must either upgrade or pin rfdetr<1.6.0. (#760, closes #730)

  • draw_synthetic_shape return type changed. The function now returns a Tuple[np.ndarray, List[float]] of (image, polygon) instead of just np.ndarray. Update any call site that unpacks only the image. (#781)

     # Before
     img = draw_synthetic_shape(canvas, shape, color)
     
     # After
     img, polygon = draw_synthetic_shape(canvas, shape, color)
  • Optional extras renamed. The PyPI install extras have been renamed for clarity:

      • rfdetr[metrics] → rfdetr[loggers]
      • rfdetr[onnxexport] → rfdetr[onnx]

🗑️ Deprecated

  • rfdetr.deploy — this internal module now redirects to rfdetr.export with a DeprecationWarning. The user-facing model.export() API is unchanged. If you import directly from rfdetr.deploy.*, migrate to rfdetr.export.* before v1.7.

  • rfdetr.util.* — redirects to rfdetr.utilities.* with a DeprecationWarning. Migrate at your convenience before v1.7.

🌱 Changed

  • Albumentations 1.x and 2.x both supported. The version constraint is now albumentations>=1.4.24,<3.0.0. Configs using the old height/width keyword arguments are automatically adapted to the 2.x size=(height, width) API. (#786, closes #779)
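A sketch of the kwarg adaptation (helper name illustrative): Albumentations 2.x replaced the `height`/`width` keyword pair with a single `size=(height, width)` tuple.

```python
def adapt_resize_kwargs(kwargs: dict) -> dict:
    """Translate Albumentations 1.x height/width kwargs to the 2.x
    size=(height, width) API; 2.x-style configs pass through unchanged."""
    if "height" in kwargs and "width" in kwargs:
        kwargs = dict(kwargs)  # don't mutate the caller's config
        kwargs["size"] = (kwargs.pop("height"), kwargs.pop("width"))
    return kwargs
```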

  • Current learning rate shown in the training progress bar. The live progress bar now displays the active learning rate alongside loss so you can see scheduler changes in real time. (#809, closes #804)

  • Faster import rfdetr startup. supervision, pytorch_lightning, and several other heavy dependencies are no longer imported at module load time — they are loaded on first use instead. Cold-import time drops measurably in inference-only environments. (#801)

🔧 Fixed

  • Fixed checkpoint loading into a model with a different architecture (segmentation vs. detection, or patch_size mismatch) — RF-DETR now raises a descriptive ValueError with actionable guidance before load_state_dict ever fires, replacing a cryptic tensor-size RuntimeError. (#810, closes #806)

  • Fixed class_names not reflecting dataset labels on model.predict() after training — class names are now synced from the dataset at the end of training so inference always uses the correct label list. (#816)

  • Fixed detection head reinitialization incorrectly overwriting fine-tuned weights when loading a checkpoint with fewer classes than the model default. The second reinitialize_detection_head call now only fires in the backbone-pretrain scenario. (#815, closes #813, #509)

  • Fixed grid_sample and bicubic interpolation silently falling back to CPU on Apple Silicon (MPS) — both operations now run natively on MPS via a custom implementation, restoring full GPU utilisation on Mac. (#821)

  • Fixed early_stopping=False in TrainConfig being silently ignored — the setting now propagates correctly and training runs to completion when disabled. (#835)

  • Fixed ValueError: matrix entries are not finite crash in HungarianMatcher when the cost matrix contains NaN or Inf values — non-finite entries are now replaced with a large finite sentinel before Hungarian assignment, and a warning is emitted at most once per matcher instance. (#787, closes #784)
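The sanitization step can be sketched as follows (the sentinel magnitude here is an illustrative choice, not the value rf-detr uses):

```python
import math

def sanitize_cost_matrix(costs, sentinel=1e6):
    """Replace NaN/Inf entries with a large finite sentinel so the
    Hungarian solver never sees non-finite values."""
    return [
        [c if math.isfinite(c) else sentinel for c in row]
        for row in costs
    ]
```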

  • Fixed YOLO dataset validation rejecting data.yml — both .yaml and .yml extensions are now accepted. (#777, closes #775)

  • Fixed degenerate bounding boxes (zero width or height) causing ValueError in Albumentations validation — they are now silently dropped before the transform pipeline runs. (#825)
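The filter amounts to a one-line predicate over box dimensions; a sketch, assuming COCO-style `[x, y, w, h]` boxes:

```python
def drop_degenerate_boxes(boxes):
    """Zero-width/height boxes would fail Albumentations' bbox
    validation, so they are filtered out before the transform
    pipeline runs."""
    return [b for b in boxes if b[2] > 0 and b[3] > 0]
```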


🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

  • Haocheng Lu (@HaochengLu) – Automatic batch size discovery (batch_size='auto')
  • Omkar Kabde (@omkar-334) (LinkedIn) – Transformers v5 migration for the DINOv2 backbone
  • Jirka Borovec (@Borda) (LinkedIn) – PyTorch Lightning migration, DDP support, MPS fixes, Albumentations 2.x support, HungarianMatcher fix, deferred imports, package restructure

Full Changelog: 1.5.2...v1.6.0

RF-DETR 1.5.2: show GPU memory

04 Mar 11:46


🚀 Added

  • Peak GPU memory in progress bars. Training and evaluation tqdm bars now display max_mem (in MB) when running on CUDA, making it easy to track hardware utilisation without a separate profiling tool. The metric is device-aware and is omitted on CPU and MPS runs. (#773)

🔧 Fixed

  • Fixed aug_config being silently ignored when training on YOLO-format datasets — build_roboflow_from_yolo never forwarded the value, so transforms always fell back to the default AUG_CONFIG regardless of what was configured. (#774)
  • Fixed segmentation evaluation metrics not being written to results_mask.json during the validation phase. The file now has the same structure as results.json and is updated after both validation and test runs. (#772)
  • Fixed AttributeError crash in update_drop_path when the DinoV2 backbone layer structure does not match any known pattern. _get_backbone_encoder_layers now returns None for unrecognised architectures and update_drop_path exits early instead of raising. (#762)
  • Fixed drop_path_rate not being forwarded to the DinoV2 model configuration, meaning stochastic depth was never actually applied even when explicitly set. A warning is now emitted when drop_path_rate > 0.0 is used with a non-windowed backbone where it has no effect. (#762)
  • Fixed incorrect COCO hierarchy filtering logic that caused parent categories to be excluded from the class list when they should have been retained. (#759)
  • Fixed evaluation metric corruption on 1-indexed Roboflow datasets caused by a flawed contiguity check in _should_use_raw_category_ids — the old heuristic inspected per-batch labels and could pick the wrong resolution path depending on which labels happened to appear first. (#755)

🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

  • Samuel Lima (@samuellimabraz) – Fix drop path in DinoV2 backbone
  • youthfrost (@youthfrost) – Fix segmentation results_mask.json saving
  • Jelle R. Dalenberg (@jrdalenberg) (LinkedIn) – Fix COCO hierarchy filtering
  • Abdul Mukit (@Abdul-Mukit) (LinkedIn) – Fix category contiguity and evaluation metric corruption on 1-indexed datasets
  • Jirka Borovec (@Borda) (LinkedIn) – Fix aug_config in YOLO dataset builder, max_mem telemetry, CI/testing infrastructure

Full Changelog: 1.5.1...1.5.2

RF-DETR 1.5.1: Nested transforms

27 Feb 10:22


🚀 Added

  • Nested Albumentations transforms. OneOf and Sequential containers now work correctly inside the augmentation pipeline. Probability settings on container transforms are ignored — they always fire, keeping composition predictable. Inference pipelines can also pass None targets so the same transform object works for both training and inference. (#752)

    from rfdetr import RFDETRSmall
    
    model = RFDETRSmall()
    model.train(
        dataset_dir="...",
        aug_config={
            "OneOf": [
                {"RandomBrightnessContrast": {"p": 0.5}},
                {"HueSaturationValue": {"p": 0.5}},
            ],
            "HorizontalFlip": {"p": 0.5},
        },
    )

🌱 Changed

  • Dataset transform pipeline now uses torchvision-native Compose, ToImage, and ToDtype instead of custom implementations. Normalize defaults to ImageNet mean/std. (#745)

🔧 Fixed

  • Fixed RFDETRMedium missing from the public API — __all__ contained a duplicate RFDETRSmall entry instead. (#748)
  • Fixed AR50_90 reporting an incorrect value in MetricsMLFlowSink due to a wrong COCO evaluation index. (#735)
  • Fixed supercategory filtering in _load_classes for COCO datasets with flat or mixed supercategory structures. (#744)
  • Fixed a crash in geometric transforms (flip, crop, etc.) when a sample contains zero-area / empty masks. (#727)
  • Fixed segmentation training on Colab — DepthwiseConvBlock now disables cuDNN for depthwise separable convolutions. (#728)
  • Locked onnxsim to <0.6.0 to prevent pip install from hanging indefinitely. (#749)

🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

  • tillfri (@tillfri) – Fix AR50_90 metric index in MLflow sink
  • justin-alt-account (@justin-alt-account) – Fix RFDETRMedium missing from __all__
  • Jirka Borovec (@Borda) (LinkedIn) – Nested Albumentations support, transform pipeline refactor, mask fix, supercategory fix, onnxsim pin, CI/testing infrastructure

Full Changelog: 1.5.0...1.5.1

RF-DETR 1.5.0: Custom augmentations

23 Feb 16:01


🚀 Added

  • Custom augmentations via Albumentations. You can now control training augmentations through the aug_config parameter in train(). Pass a dictionary of Albumentations transforms, choose a built-in named preset, or disable augmentations entirely. Bounding boxes and segmentation masks are automatically transformed alongside images. (#263, #702)

    from rfdetr import RFDETRSmall
    from rfdetr.datasets.aug_config import AUG_CONSERVATIVE, AUG_AGGRESSIVE, AUG_AERIAL, AUG_INDUSTRIAL
    
    model = RFDETRSmall()
    
    # Use a built-in preset
    model.train(dataset_dir="...", aug_config=AUG_AGGRESSIVE, progress_bar=True)
    
    # Or define transforms explicitly
    model.train(
        dataset_dir="...",
        aug_config={
            "HorizontalFlip": {"p": 0.5},
            "RandomBrightnessContrast": {"brightness_limit": 0.2, "p": 0.4},
            "GaussianBlur": {"blur_limit": 3, "p": 0.2},
        },
        progress_bar=True,
    )
    
    # Disable all augmentations
    model.train(dataset_dir="...", aug_config={})

    Preset guide:

      • AUG_CONSERVATIVE – Small datasets (under 500 images)
      • AUG_AGGRESSIVE – Large datasets (2000+ images)
      • AUG_AERIAL – Satellite / overhead imagery
      • AUG_INDUSTRIAL – Manufacturing / inspection data
  • Save augmented training image samples. Enable save_dataset_grids=True in TrainConfig to write 3×3 JPEG grids of augmented training and validation images to your output directory before training begins, making it easy to verify your augmentation pipeline without running a full epoch. (#153)

    from rfdetr import RFDETRSmall
    
    model = RFDETRSmall()
    model.train(dataset_dir="...", save_dataset_grids=True, output_dir="output/")
    # Grids are saved to output/:
    #   train_batch0_grid.jpg, train_batch1_grid.jpg, train_batch2_grid.jpg
    #   val_batch0_grid.jpg,   val_batch1_grid.jpg,   val_batch2_grid.jpg
  • ClearML training logger. Set clearml=True in TrainConfig to stream per-epoch metrics directly to your ClearML project. (#520)

    from rfdetr import RFDETRSmall
    
    model = RFDETRSmall()
    model.train(dataset_dir="...", clearml=True)
  • MLflow training logger. Set mlflow=True in TrainConfig to log runs and metrics to MLflow, with support for custom tracking URIs and system metrics. (#109)

    from rfdetr import RFDETRSmall
    
    model = RFDETRSmall()
    model.train(dataset_dir="...", mlflow=True)
  • Progress bar for training and validation. A live progress bar now shows batch-level progress during training and validation, and on-screen logs are structured for easier reading. (#204)

  • device field added to TrainConfig, allowing explicit device selection when configuring training programmatically. (#687)

  • ModelConfig now raises an error on unknown parameters, preventing silent misconfiguration from typos or stale config keys. (#196)

  • TensorRT export guide. New documentation section covering how to convert an exported ONNX model to a TensorRT engine for maximum inference throughput. (#175)

🌱 Changed

  • OPEN_SOURCE_MODELS constant deprecated in favour of the ModelWeights enum for cleaner model weight references. (#696)
  • Added MD5 checksum validation for pretrained weight downloads, preventing silent use of corrupted files. (#679)
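The checksum itself is straightforward; a self-contained sketch using the standard library (chunked so large checkpoint files are not read into memory at once):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, streaming in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```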

🔧 Fixed

  • Fixed Albumentations bool-mask crash that occurred during segmentation training. (#706)
  • Fixed UnboundLocalError when resuming training from a completed checkpoint. (#707)
  • Prevented corruption of checkpoint_best_total.pth via atomic checkpoint stripping. (#708)
  • Fixed PyTorch 2.9+ compatibility issue with CUDA capability detection. (#686)
  • Fixed dtype mismatch error when use_position_supervised_loss=True. (#447)
  • Fixed inconsistent return values from build_model. (#519)
  • Fixed positional_encoding_size type annotation from bool to int. (#524)
  • Fixed ONNX export output_names to include masks when exporting segmentation models. (#402)
  • Fixed num_select not being correctly updated during segmentation model fine-tuning. (#399)
  • Fixed np.argwhere → np.argmax misuse. (#536)
  • Fixed COCO sparse category ID remapping logic so that non-contiguous or offset category IDs are correctly handled. (#712)
  • Fixed segmentation mask filtering when using aggressive augmentations. (#717)

🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release:

  • Panagiotis Moraitis (@panagiotamoraiti) (LinkedIn) – Custom Albumentations augmentation wrapper
  • Shubham Rajvanshi (@shubsraj) (LinkedIn) – Progress bar and structured training logs
  • Clement (@CorporalCleg) – ClearML logger integration
  • Lakshman (@lab176344) – MLflow logger integration
  • Mattia Di Giusto (@picjul) (LinkedIn) – Save augmented training image samples
  • Juan Cobos (@juan-cobos) – device field in TrainConfig
  • Ahmed Samir (@Ahmed-Samir11) – Error on unknown ModelConfig parameters
  • Dominik Baran (@Yozer) (LinkedIn) – Fix segmentation mask filtering with aggressive augmentations
  • Sungchul Kim (@sungchul2) (LinkedIn) – Fix num_select during segmentation fine-tuning
  • Abdul Mukit (@Abdul-Mukit) (LinkedIn) – Fix ONNX export output names for segmentation
  • Alarmod (@Alarmod) – PyTorch 2.9+ compatibility fix
  • lixiaolei1982 (@lixiaolei1982) – Fix build_model return values & positional_encoding_size type
  • kawabe-jiw (@kawabe-jiw) – Fix dtype mismatch with use_position_supervised_loss=True
  • Andrei Moraru (@AndreiMoraru123) (LinkedIn) – np.argwhere → np.argmax fix
  • Niels Teunissen (@DatSplit) – TensorRT export documentation
  • stop1one (@stop1one) (LinkedIn) – Stabilize distributed training & test reliability
  • Jirka Borovec (@Borda) (LinkedIn) – Augmentation presets, MD5 weight validation, ModelWeights enum, CI/testing infrastructure, docs

RF-DETR 1.4.3

16 Feb 13:24


🐞 Fixed

  • Export: Fix deploy_to_roboflow seg model export (#578)

🛠️ Changed / Maintenance

  • Validation: Add MD5 validation for file downloads and pretrained weights handling (#679)
  • Testing & Benchmarking:
    • Add segmentation model benchmarks for COCO dataset (#684)
    • Adjust COCO inference stats/thresholds (#678)
  • Docs: Update license section in README.md

🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release.


Full Changelog: 1.4.2...1.4.3

RF-DETR 1.4.2

12 Feb 00:46


🚀 Added

  • YOLO Support: Update YOLO dataset format support (#74)
  • Inference: Support for image URLs in prediction (#629)
  • Training:
    • Allow training on custom datasets without test splits when run_test=False (#628)
    • Add custom print-freq argument (#603)

🐞 Fixed

  • CLI: Fix error in CLI script (#246)
  • Export: Fix RFDETR-Seg ONNX Export failing (#626)
  • Training/Validation:
    • Modified misleading num_classes Warnings (#261)
    • Add F1 score assertions and clarify IoU threshold in tests (#596)
  • Dependencies: Hotfix: pin transformers dependency to version range >4.0.0, <5.0.0 (#599)

🛠️ Changed / Maintenance

  • Synthetic Data: Add synthetic dataset generation module and corresponding tests (#617)
  • Benchmarking:
    • Add COCO inference benchmarking tests (#652) and parametrization for multiple model sizes (#661, #662)
    • Add synthetic convergence benchmark test (#638)
  • Developer Experience:
    • Replace print statements with logger calls and remove unused imports (#158)
  • Refactoring:
    • Separate platform sub-module and migrate to rfdetr_plus package (#645)
  • Testing & CI:
    • Stabilize training tests and centralize seed logic with seed_all (#655)
  • Docs Formatting: Standardize documentation formatting and enable codespell/mdformat hooks (#634, #635, #637)

🏆 Contributors

A special welcome to our new contributors and a big thank you to everyone who helped with this release.


Full Changelog: 1.4.1...1.4.2