
trackers


Hello

trackers is a unified library offering clean room re-implementations of leading multi-object tracking algorithms. Its modular design allows you to easily swap trackers and integrate them with object detectors from various libraries like inference, ultralytics, or transformers.

| Tracker   | Paper | Year | Status | Colab |
|-----------|-------|------|--------|-------|
| SORT      | arXiv | 2016 | ✅     | colab |
| ByteTrack | arXiv | 2021 | ✅     | colab |
| OC-SORT   | arXiv | 2022 | 🚧     | 🚧    |
| BoT-SORT  | arXiv | 2022 | 🚧     | 🚧    |
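Because every tracker exposes the same update() method over supervision Detections, swapping implementations is a one-line change. Below is a minimal sketch that runs a single frame of hand-written, illustrative detections through SORTTracker; another tracker from the table could be dropped in the same way once imported (check the API reference for the exact class names).

import numpy as np
import supervision as sv
from trackers import SORTTracker

tracker = SORTTracker()

# Detections for one frame in supervision's format
# (box coordinates are made-up values, not real model output).
detections = sv.Detections(
    xyxy=np.array([[10.0, 20.0, 60.0, 90.0]]),
    confidence=np.array([0.9]),
    class_id=np.array([0]),
)

tracked = tracker.update(detections)
print(tracked.tracker_id)  # per-object IDs; may be empty until a track is confirmed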
[demo video: trackers-2.0.0-promo.mp4]

Installation

Pip install the trackers package in a Python>=3.9 environment.

pip install trackers
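To verify the install, print the package version (assuming trackers exposes a __version__ attribute, as Roboflow's other packages do):

python -c "import trackers; print(trackers.__version__)"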
Install from source

By installing trackers from source, you can explore the most recent features and enhancements that have not yet been officially released. Please note that these updates are still in development and may not be as stable as the latest published release.

pip install git+https://github.com/roboflow/trackers.git

Benchmarks

Performance of the trackers on the test splits of three multi-object tracking benchmarks, reported as HOTA, IDF1, and MOTA (higher is better for all three).

MOT17

| Tracker   | HOTA | IDF1 | MOTA |
|-----------|------|------|------|
| SORT      | 58.4 | 69.9 | 67.2 |
| ByteTrack | 60.1 | 73.2 | 74.1 |

SportsMOT

| Tracker   | HOTA | IDF1 | MOTA |
|-----------|------|------|------|
| SORT      | 70.9 | 68.9 | 95.7 |
| ByteTrack | 73.0 | 72.5 | 96.4 |

SoccerNet-tracking

| Tracker   | HOTA | IDF1 | MOTA |
|-----------|------|------|------|
| SORT      | 81.6 | 76.2 | 95.1 |
| ByteTrack | 84.0 | 78.1 | 97.8 |

Quickstart

With a modular design, trackers lets you combine object detectors from different libraries with the tracker of your choice. Here's how you can use SORTTracker with various detectors:

import supervision as sv
from trackers import SORTTracker
from rfdetr import RFDETRMedium

tracker = SORTTracker()
model = RFDETRMedium(device="cuda")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    detections = model.predict(frame, threshold=0.5)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="<INPUT_VIDEO_PATH>",
    target_path="<OUTPUT_VIDEO_PATH>",
    callback=callback,
)
run with inference
import supervision as sv
from trackers import SORTTracker
from inference import get_model

tracker = SORTTracker()
model = get_model(model_id="rfdetr-medium")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    result = model.infer(frame)[0]
    detections = sv.Detections.from_inference(result)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="<INPUT_VIDEO_PATH>",
    target_path="<OUTPUT_VIDEO_PATH>",
    callback=callback,
)
run with ultralytics
import supervision as sv
from trackers import SORTTracker
from ultralytics import YOLO

tracker = SORTTracker()
model = YOLO("yolo11m.pt")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="<INPUT_VIDEO_PATH>",
    target_path="<OUTPUT_VIDEO_PATH>",
    callback=callback,
)
run with transformers
import torch
import supervision as sv
from trackers import SORTTracker
from transformers import RTDetrV2ForObjectDetection, RTDetrImageProcessor

tracker = SORTTracker()
image_processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_v2_r18vd")
model = RTDetrV2ForObjectDetection.from_pretrained("PekingU/rtdetr_v2_r18vd")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

def callback(frame, _):
    inputs = image_processor(images=frame, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    h, w, _ = frame.shape
    results = image_processor.post_process_object_detection(
        outputs,
        target_sizes=torch.tensor([(h, w)]),
        threshold=0.5
    )[0]

    detections = sv.Detections.from_transformers(
        transformers_results=results,
        id2label=model.config.id2label
    )

    detections = tracker.update(detections)
    return annotator.annotate(frame, detections, labels=detections.tracker_id)

sv.process_video(
    source_path="<INPUT_VIDEO_PATH>",
    target_path="<OUTPUT_VIDEO_PATH>",
    callback=callback,
)
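sv.process_video reads from and writes to video files; for a live source you can drive the same callback logic manually, one frame at a time and in frame order. The sketch below reuses the ultralytics detector from the example above with OpenCV's capture API; the camera index and window handling are illustrative placeholders.

import cv2
import supervision as sv
from trackers import SORTTracker
from ultralytics import YOLO

tracker = SORTTracker()
model = YOLO("yolo11m.pt")
annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

cap = cv2.VideoCapture(0)  # webcam; replace with a stream URL or device index
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    detections = tracker.update(detections)  # call once per frame, in order
    frame = annotator.annotate(frame, detections, labels=detections.tracker_id)
    cv2.imshow("trackers", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()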

License

The code is released under the Apache 2.0 license.

Contribution

We welcome all contributions, whether that's reporting issues, suggesting features, or submitting pull requests. Please read our contributor guidelines to learn about our processes and best practices.
