Releases · roboflow/supervision
supervision-0.7.0
🚀 Added
- `Detections.from_yolo_nas` to enable seamless integration with YOLO-NAS model. (#91)
- Ability to load datasets in YOLO format using `Dataset.from_yolo`. (#86)
- `Detections.merge` to merge multiple `Detections` objects together, as sketched below. (#84)
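A minimal sketch of the new `Detections.merge`, using hand-built toy data rather than real model output:

```python
import numpy as np
import supervision as sv

# two hand-built Detections objects standing in for real model results
detections_a = sv.Detections(
    xyxy=np.array([[10, 10, 50, 50]], dtype=float),
    class_id=np.array([0]),
)
detections_b = sv.Detections(
    xyxy=np.array([[60, 60, 120, 120]], dtype=float),
    class_id=np.array([1]),
)

# merge concatenates the fields of every Detections in the list
merged = sv.Detections.merge([detections_a, detections_b])
print(len(merged))  # 2
```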
🌱 Changed
- `LineZoneAnnotator.annotate` to allow custom text for the in and out tags, as sketched below. (#44)
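A sketch of the customizable tags; the `custom_in_text` / `custom_out_text` parameter names are my reading of #44 and worth verifying against the released 0.7.0 signature:

```python
import supervision as sv

# line zone spanning the frame; Point takes (x, y)
line_zone = sv.LineZone(start=sv.Point(0, 500), end=sv.Point(1280, 500))

# custom_in_text / custom_out_text replace the default "in:" / "out:" labels
line_zone_annotator = sv.LineZoneAnnotator(
    thickness=2,
    custom_in_text="entered",
    custom_out_text="exited",
)
# later, per frame: frame = line_zone_annotator.annotate(frame=frame, line_counter=line_zone)
```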
🛠️ Fixed
- `LineZoneAnnotator.annotate` not returning the annotated frame. (#81)
🏆 Contributors
supervision-0.6.0
🚀 Added
- Initial `Dataset` support and ability to save `Detections` in Pascal VOC XML format. (#71)
- New `mask_to_polygons`, `filter_polygons_by_area`, `polygon_to_xyxy` and `approximate_polygon` utilities. (#71)
- Ability to load Pascal VOC XML object detection dataset as `Dataset`. (#72)
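A hand-rolled example of the new polygon utilities on a toy binary mask; the signatures reflect my understanding of the 0.6.0 API:

```python
import numpy as np
import supervision as sv

# toy binary mask with a single filled rectangle
mask = np.zeros((100, 100), dtype=np.uint8)
mask[20:60, 30:70] = 1

polygons = sv.mask_to_polygons(mask)                         # list of (N, 2) arrays
polygons = sv.filter_polygons_by_area(polygons, min_area=100.0)
xyxy = sv.polygon_to_xyxy(polygons[0])                       # [x_min, y_min, x_max, y_max]
approx = sv.approximate_polygon(polygons[0], percentage=0.05)
```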
🌱 Changed
- Order of `Detections` attributes to make it consistent with the order of objects in the `__iter__` tuple. (#70)
- `generate_2d_mask` renamed to `polygon_to_mask`, as sketched below. (#71)
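The rename in practice, assuming the argument names carried over unchanged:

```python
import numpy as np
import supervision as sv

polygon = np.array([[10, 10], [90, 10], [90, 90], [10, 90]])

# 0.5.x: sv.generate_2d_mask(polygon=polygon, resolution_wh=(100, 100))
mask = sv.polygon_to_mask(polygon=polygon, resolution_wh=(100, 100))
```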
🏆 Contributors
supervision-0.5.2
supervision-0.5.1
🛠️ Fixed
- Fixed `Detections.__getitem__` method not returning the mask for the selected item.
- Fixed `Detections.area` crashing for mask detections.
🏆 Contributors
supervision-0.5.0
🚀 Added
- `Detections.mask` to enable segmentation support. (#58)
- `MaskAnnotator` to allow easy `Detections.mask` annotation. (#58)
- `Detections.from_sam` to enable native Segment Anything Model (SAM) support. (#58)
🌱 Changed
- `Detections.area` behaviour to work not only with boxes but also with masks, as sketched below. (#58)
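A self-contained sketch of the new mask support with hand-built data; for real SAM output you would call `sv.Detections.from_sam(sam_result)` instead:

```python
import numpy as np
import supervision as sv

image = np.zeros((100, 100, 3), dtype=np.uint8)

# one detection with a (num_detections, H, W) boolean mask
mask = np.zeros((1, 100, 100), dtype=bool)
mask[0, 20:60, 30:70] = True
detections = sv.Detections(
    xyxy=np.array([[30, 20, 70, 60]], dtype=float),
    mask=mask,
    class_id=np.array([0]),
)

annotated = sv.MaskAnnotator().annotate(scene=image.copy(), detections=detections)
print(detections.area)  # computed from the mask when one is present
```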
🏆 Contributors
supervision-0.4.0
🚀 Added
- `Detections.empty` to allow easy creation of empty `Detections` objects. (#48)
- `Detections.from_roboflow` to allow easy creation of `Detections` objects from Roboflow API inference results. (#56)
- `plot_images_grid` to allow easy plotting of multiple images on a single plot. (#56)
- Initial support for Pascal VOC XML format with `detections_to_voc_xml` method. (#56)
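A short sketch of `Detections.empty` and `plot_images_grid`; the `grid_size` and `titles` arguments reflect my reading of the 0.4.0 API:

```python
import numpy as np
import supervision as sv

# an empty Detections object, handy as a no-op default
detections = sv.Detections.empty()
print(len(detections))  # 0

# 2x2 grid of placeholder images
images = [np.zeros((100, 100, 3), dtype=np.uint8) for _ in range(4)]
sv.plot_images_grid(images, grid_size=(2, 2), titles=["a", "b", "c", "d"])
```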
🌱 Changed
- `show_frame_in_notebook` refactored and renamed to `plot_image`. (#56)
🏆 Contributors
supervision-0.3.2
supervision-0.3.1
supervision-0.3.0
🚀 Added
New methods in `sv.Detections` API:
- `from_transformers` - convert Object Detection 🤗 Transformers result into `sv.Detections`
- `from_detectron2` - convert Detectron2 result into `sv.Detections`
- `from_coco_annotations` - convert COCO annotation into `sv.Detections`
- `area` - dynamically calculated property storing bbox area
- `with_nms` - initial implementation (only class agnostic) of `sv.Detections` NMS, as sketched below
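A sketch of `with_nms` on hand-made overlapping boxes; since this first implementation is class agnostic, the differing class labels do not prevent suppression:

```python
import numpy as np
import supervision as sv

detections = sv.Detections(
    xyxy=np.array([
        [10, 10, 60, 60],
        [12, 12, 62, 62],      # heavy overlap with the first box
        [100, 100, 150, 150],
    ], dtype=float),
    confidence=np.array([0.9, 0.8, 0.7]),
    class_id=np.array([0, 1, 1]),
)

# class-agnostic NMS drops the lower-confidence duplicate
# even though its class differs from the first box
filtered = detections.with_nms(threshold=0.5)
print(len(filtered))  # 2
```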
🌱 Changed
- Make `sv.Detections.confidence` field `Optional`.
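In practice this means a `Detections` object can now be built without scores, e.g. from ground-truth annotations:

```python
import numpy as np
import supervision as sv

# ground-truth boxes carry no confidence scores
detections = sv.Detections(
    xyxy=np.array([[0, 0, 10, 10]], dtype=float),
    class_id=np.array([0]),
)
assert detections.confidence is None
```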
🏆 Contributors
supervision-0.2.0
💪 Killer features
- Support for `PolygonZone` and `PolygonZoneAnnotator` 🔥
🐍 Code example
```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect people (class_id == 0) and count them inside the zone
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)
```
- Advanced `sv.Detections` filtering with pandas-like API:
```python
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
```
- Improved integration with YOLOv5 and YOLOv8 models:
```python
import torch
import supervision as sv

# YOLOv5 via torch.hub; `frame` is an image loaded elsewhere
model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)
```
```python
from ultralytics import YOLO
import supervision as sv

# YOLOv8 via the ultralytics package
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
```
🚀 Added
- `supervision.get_polygon_center` function - takes in a polygon as a 2-dimensional `numpy.ndarray` and returns the center of the polygon as a `Point` object
- `supervision.draw_polygon` function - draw a polygon on a scene
- `supervision.draw_text` function - draw text on a scene
- `supervision.ColorPalette.default()` class method - to generate a default `ColorPalette`
- `supervision.generate_2d_mask` function - generate a 2D mask from a polygon
- `supervision.PolygonZone` class - to define polygon zones and validate if `supervision.Detections` are in the zone
- `supervision.PolygonZoneAnnotator` class - to draw `supervision.PolygonZone` on a scene
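A sketch combining a few of these utilities; the `draw_text` argument names (in particular `text_anchor`) are my assumption about the 0.2.0 drawing API:

```python
import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [300, 100], [300, 300], [100, 300]])
center = sv.get_polygon_center(polygon=polygon)   # Point(x=200, y=200)

scene = np.zeros((400, 400, 3), dtype=np.uint8)
scene = sv.draw_polygon(scene=scene, polygon=polygon, color=sv.Color.white(), thickness=2)
scene = sv.draw_text(scene=scene, text="zone", text_anchor=center)
```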
🌱 Changed
- `VideoInfo` API - change the property name `resolution` -> `resolution_wh` to make it more descriptive; convert `VideoInfo` to a `dataclass`
- `process_frame` API - change argument name `frame` -> `scene` to make it consistent with other classes and methods
- `LineCounter` API - rename class `LineCounter` -> `LineZone` to make it consistent with `PolygonZone`
- `LineCounterAnnotator` API - rename class `LineCounterAnnotator` -> `LineZoneAnnotator`
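For existing code the renames are a drop-in change; a before/after sketch:

```python
import supervision as sv

# before 0.2.0
# line_counter = sv.LineCounter(start=sv.Point(0, 500), end=sv.Point(1280, 500))
# annotator = sv.LineCounterAnnotator()

# from 0.2.0
line_zone = sv.LineZone(start=sv.Point(0, 500), end=sv.Point(1280, 500))
annotator = sv.LineZoneAnnotator()
```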