
Releases: roboflow/supervision

supervision-0.7.0

10 May 22:43

πŸš€ Added

  • Detections.from_yolo_nas to enable seamless integration with YOLO-NAS model. (#91)
  • Ability to load datasets in YOLO format using Dataset.from_yolo. (#86)
  • Detections.merge to merge multiple Detections objects together. (#84)

🌱 Changed

  • LineZoneAnnotator.annotate to allow for the custom text for the in and out tags. (#44)

πŸ› οΈ Fixed

  • LineZoneAnnotator.annotate does not return annotated frame. (#81)

πŸ† Contributors

supervision-0.6.0

19 Apr 21:08

πŸš€ Added

  • Initial Dataset support and ability to save Detections in Pascal VOC XML format. (#71)
  • New mask_to_polygons, filter_polygons_by_area, polygon_to_xyxy and approximate_polygon utilities. (#71)
  • Ability to load Pascal VOC XML object detections dataset as Dataset. (#72)

🌱 Changed

  • order of Detections attributes to make it consistent with order of objects in __iter__ tuple. (#70)
  • generate_2d_mask to polygon_to_mask. (#71)

πŸ† Contributors

supervision-0.5.2

13 Apr 09:10

πŸ› οΈ Fixed

  • Fixed LineZone.trigger function expects 4 values instead of 5 (#63)

πŸ† Contributors

supervision-0.5.1

12 Apr 16:00

πŸ› οΈ Fixed

  • Fixed Detections.__getitem__ method did not return mask for selected item.
  • Fixed Detections.area crashed for mask detections.

πŸ† Contributors

supervision-0.5.0

10 Apr 22:07

πŸš€ Added

  • Detections.mask to enable segmentation support. (#58)
  • MaskAnnotator to allow easy Detections.mask annotation. (#58)
  • Detections.from_sam to enable native Segment Anything Model (SAM) support. (#58)

🌱 Changed

  • Detections.area behaviour to work not only with boxes but also with masks. (#58)

πŸ† Contributors

supervision-0.4.0

05 Apr 15:33

πŸš€ Added

  • Detections.empty to allow easy creation of empty Detections objects. (#48)
  • Detections.from_roboflow to allow easy creation of Detections objects from Roboflow API inference results. (#56)
  • plot_images_grid to allow easy plotting of multiple images on single plot. (#56)
  • Initial support for Pascal VOC XML format with detections_to_voc_xml method. (#56)

🌱 Changed

  • show_frame_in_notebook refactored and renamed to plot_image. (#56)

πŸ† Contributors

supervision-0.3.2

24 Mar 16:49

🌱 Changed

  • Drop requirement for class_id in sv.Detections (#50) to make it more flexible

πŸ† Contributors

supervision-0.3.1

14 Mar 13:46

🌱 Changed

  • Detections.wth_nms support class agnostic and non-class agnostic case (#36)

πŸ› οΈ Fixed

  • PolygonZone throws an exception when the object touches the bottom edge of the image (#41)
  • Detections.wth_nms method throws an exception when Detections is empty (#42)

πŸ† Contributors

supervision-0.3.0

08 Mar 09:49

πŸš€ Added

New methods in sv.Detections API:

  • from_transformers - convert Object Detection πŸ€— Transformer result into sv.Detections
  • from_detectron2 - convert Detectron2 result into sv.Detections
  • from_coco_annotations - convert COCO annotation into sv.Detections
  • area - dynamically calculated property storing bbox area
  • with_nms - initial implementation (only class agnostic) of sv.Detections NMS

🌱 Changed

  • Make sv.Detections.confidence field Optional.

πŸ† Contributors

supervision-0.2.0

07 Feb 22:11

πŸ”ͺ Killer features

  • Support for PolygonZone and PolygonZoneAnnotator πŸ”₯
πŸ‘‰ Code example
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initialize polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initialize annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)


  • Advanced vs.Detections filtering with pandas-like API.
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
  • Improved integration with YOLOv5 and YOLOv8 models.
# YOLOv5 integration (frame is a single image / video frame loaded elsewhere)
import torch
import supervision as sv

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)

# YOLOv8 integration
from ultralytics import YOLO
import supervision as sv

model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)

πŸš€ Added

  • supervision.get_polygon_center function - takes in a polygon as a 2-dimensional numpy.ndarray and returns the center of the polygon as a Point object
  • supervision.draw_polygon function - draw a polygon on a scene
  • supervision.draw_text function - draw a text on a scene
  • supervision.ColorPalette.default() - class method - to generate default ColorPalette
  • supervision.generate_2d_mask function - generate a 2D mask from a polygon
  • supervision.PolygonZone class - to define polygon zones and validate if supervision.Detections are in the zone
  • supervision.PolygonZoneAnnotator class - to draw supervision.PolygonZone on scene

🌱 Changed

  • VideoInfo API - change the property name resolution -> resolution_wh to make it more descriptive; convert VideoInfo to dataclass
  • process_frame API - change argument name frame -> scene to make it consistent with other classes and methods
  • LineCounter API - rename class LineCounter -> LineZone to make it consistent with PolygonZone
  • LineCounterAnnotator API - rename class LineCounterAnnotator -> LineZoneAnnotator

πŸ† Contributors