Label comprehensive 3D scenes from LiDAR or RADAR sensors with additional photo and video context, AI object tracking and point cloud segmentation.
Annotation of point cloud scenes is not an easy task. Inspired by professional software like Blender, Supervisely offers user-friendly tools for working with thousands of points.
Cuboids |
Lasso |
Landmarks |
---|---|---|
Visualization and, especially, labeling of spatial point clouds is not a simple task. Unlike plain, well-understood image labeling, successfully completing an annotation project in 3D space requires solving three additional challenges and providing:
- User-friendly navigation in three dimensions.
- Handy tools for accurate object detection.
- Maximum information for correct classification.
Before you can identify and label an object of interest, you need to see it clearly from every angle. To let some sunlight into the scene, we have introduced the navigation scheme widely known from video games: WASD keyboard controls to move around the point cloud and the mouse to control the camera angle.
Along with additional viewports offering top, side and front perspectives using orthographic projections, this gives an accurate representation of what you are dealing with.
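Since orthographic projections have no perspective foreshortening, a top, side or front view amounts to simply dropping one coordinate. The sketch below illustrates the idea with numpy; it is a minimal illustration (assuming a z-up convention), not Supervisely's actual rendering code, and the function name is hypothetical.

```python
import numpy as np

def orthographic_view(points: np.ndarray, view: str) -> np.ndarray:
    """Project an N x 3 point cloud onto a 2D plane by dropping one axis.

    With an orthographic camera there is no perspective divide, so each
    canonical view is just the remaining two coordinates (z-up assumed).
    """
    axes = {
        "top": (0, 1),    # look down the z axis -> keep (x, y)
        "front": (0, 2),  # look along the y axis -> keep (x, z)
        "side": (1, 2),   # look along the x axis -> keep (y, z)
    }
    return points[:, axes[view]]

cloud = np.array([[1.0, 2.0, 0.5],
                  [3.0, 1.0, 1.5]])
top_view = orthographic_view(cloud, "top")  # -> [[1.0, 2.0], [3.0, 1.0]]
```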
WASD and mouse navigation |
Top-Side-Front viewports |
---|---|
1M+ points with GPU-acceleration and rendering options |
Colormaps |
---|---|
You cannot precisely place a 3D box through a flat 2D monitor screen, which is why having multiple viewports for editing a 3D box in multiple projections is key to accurate annotation.
Accurate 3D box labeling in top-side-front projections |
Roll, pitch and yaw heading angle |
---|---|
Auto frustum culling to highlight the actual object |
---|
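Culling everything outside a labeled box comes down to a point-in-oriented-box test: transform points into the box's local frame and apply an axis-aligned bounds check. Below is a minimal numpy sketch of that test (yaw-only rotation assumed); it is an illustration of the technique, not Supervisely's implementation.

```python
import numpy as np

def points_in_box(points, center, size, yaw):
    """Boolean mask of points inside a yaw-rotated 3D box.

    Undo the box translation and yaw so the box becomes axis-aligned,
    then keep points whose local coordinates fit within the half-sizes.
    """
    c, s = np.cos(-yaw), np.sin(-yaw)
    # Rotation about the z axis by -yaw brings points into the box frame.
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = (np.asarray(points) - np.asarray(center)) @ rot.T
    half = np.asarray(size) / 2.0
    return np.all(np.abs(local) <= half, axis=1)

# A 4 x 1 x 1 box rotated 90 degrees points along the y axis:
mask = points_in_box(np.array([[0.0, 1.5, 0.0],
                               [0.0, 3.0, 0.0]]),
                     np.zeros(3), (4.0, 1.0, 1.0), np.pi / 2)
```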
Provide more information for accurate labeling and identification with photo and video context. Supervisely automatically calculates the correspondence between the 3D space and the 2D context and projects your labeled objects onto it, letting you achieve unprecedented labeling quality.
Object projection on cameras |
Multiple camera views |
---|---|
Move the camera to the same position as seen in a photo |
---|
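Projecting labeled 3D objects onto a camera image follows the standard pinhole model: transform world points into the camera frame with the extrinsics, apply the intrinsic matrix, then divide by depth. The sketch below shows the math with numpy; the calibration values are made-up examples, and this is an illustration of the general model, not Supervisely's internal code.

```python
import numpy as np

def project_to_image(points_world, K, R, t):
    """Project N x 3 world points into pixel coordinates (u, v).

    Pinhole model: p_cam = R @ p_world + t, then apply intrinsics K
    and perform the perspective divide by depth.
    """
    cam = points_world @ R.T + t      # world frame -> camera frame
    uvw = cam @ K.T                   # apply intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]   # divide by depth -> pixels

# Example intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
# A point 10 m straight ahead lands on the principal point.
uv = project_to_image(np.array([[0.0, 0.0, 10.0]]),
                      K, np.eye(3), np.zeros(3))  # -> [[640.0, 360.0]]
```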
In many tasks, such as simultaneous localization and mapping (SLAM) for autonomous cars and lane detection, you have to label not just a single point cloud but a series of clouds, called an episode. Supervisely provides a labeling toolbox specially designed for this case.
Just as the specially designed video labeling toolbox is remarkably more efficient than annotating separate frames in the image toolbox, the dedicated 3D episodes toolbox is better in every aspect, from playback speed to tracking performance.
With hundreds of clouds and labeled objects, it’s easy to lose track.
The episode timeline panel provides the overall structure, shows at a glance what is already labeled, and simplifies editing of tag segments and tracked objects.
A common task is labeling a moving object across multiple point clouds. Supervisely offers built-in AI tracking algorithms that automatically detect and track objects of various classes, as well as classic linear interpolation and other object tracking methods.
Tracking algorithms |
Annotation objects built with hundreds of boxes |
---|---|
Merge multiple objects together |
Split object by frames |
---|---|
Copy and paste objects between point clouds |
Color highlighting of IDs |
---|---|
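Linear interpolation tracking fills in a box's pose between two keyframes. Position interpolates component-wise; the heading should follow the shortest arc so a yaw crossing the ±π boundary does not spin the wrong way. A minimal numpy sketch of the idea (the function and its keyframe format are hypothetical, not Supervisely's API):

```python
import numpy as np

def interpolate_box(key_a, key_b, frame_a, frame_b, frame):
    """Linearly interpolate a tracked box pose between two keyframes.

    Each key is (center_xyz, yaw). Centers interpolate linearly; yaw is
    interpolated along the shortest arc to handle the +/-pi wraparound.
    """
    t = (frame - frame_a) / (frame_b - frame_a)
    center_a, yaw_a = key_a
    center_b, yaw_b = key_b
    center = (1 - t) * np.asarray(center_a, float) + t * np.asarray(center_b, float)
    # Wrap the yaw difference into (-pi, pi] before scaling by t.
    dyaw = (yaw_b - yaw_a + np.pi) % (2 * np.pi) - np.pi
    return center, yaw_a + t * dyaw

# Halfway between frame 0 and frame 10: center (5, 0, 0), yaw pi/4.
center, yaw = interpolate_box(((0, 0, 0), 0.0),
                              ((10, 0, 0), np.pi / 2), 0, 10, 5)
```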
3D semantic segmentation of point clouds is quite challenging. But with the right tools Supervisely provides, classifying even the most complex point clouds becomes much easier.
Attach additional information to annotation objects, or tag specific segments of a 3D point cloud episode with semantics, such as “what’s going on here”.