scale_lidar_io
scale_lidar_io.scene.LidarScene
- class LidarScene
Bases:
object
LidarScene object representing all frames in a scene.
- Scene properties:
cameras: List of cameras
frames: List of frames
base_url: URL used to host the data in S3
- Return type
object
- get_camera(camera_id=None, index: Optional[int] = None) → scale_lidar_io.camera.LidarCamera
Get a camera by id (or index) or create one if it does not exist
- Parameters
camera_id (str, int) – The camera id
index (int) – The camera index
- Returns
LidarCamera
- Return type
LidarCamera
- get_frame(frame_id=None, index: Optional[int] = None) → scale_lidar_io.frame.LidarFrame
Get a frame by id (or index) or create one if it does not exist
- Parameters
frame_id (str, int) – The frame id
index (int) – The frame index
- Returns
LidarFrame
- Return type
LidarFrame
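A minimal sketch of these accessors (ids are arbitrary; LidarScene is assumed to take no required constructor arguments, as documented above):
from scale_lidar_io.scene import LidarScene

scene = LidarScene()

# get_camera / get_frame create the object on first access,
# so a scene can be assembled by simply indexing into it
camera = scene.get_camera(camera_id=0)
frame = scene.get_frame(frame_id=0)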
- apply_transforms(world_transforms: List[scale_lidar_io.transform.Transform])
Apply transformations to all the frames (the number of transforms should match the number of frames)
- Parameters
world_transforms (list(Transform)) – List of Transform
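For example, applying one hypothetical device pose per frame (the list length must match the number of frames in the scene):
from scale_lidar_io.transform import Transform

# One world transform per frame, e.g. from an ego-motion source
world_transforms = [
    Transform.from_euler([0, 0, 0], degrees=True),   # frame 0
    Transform.from_euler([0, 0, 1], degrees=True),   # frame 1
]
scene.apply_transforms(world_transforms)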
- filter_points(min_intensity=None, min_intensity_percentile=None)
Filter points based on intensity
- Parameters
min_intensity (int) – Minimum intensity allowed
min_intensity_percentile (int) – Minimum percentile allowed (uses np.percentile)
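For example (threshold values are arbitrary):
# Keep only points with intensity >= 5 ...
scene.filter_points(min_intensity=5)
# ... or drop the dimmest 10% of points
scene.filter_points(min_intensity_percentile=10)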
- get_projected_image(camera_id, color_mode='intensity', frames_index=range(0, 1), **kwargs)
Get the camera_id image with projected points (legacy method)
- Parameters
camera_id (str, int) – Camera id/Name/Identifier
color_mode (str) – Color mode; modes are: ‘depth’, ‘intensity’ and ‘default’
frames_index (range) – Project points for a range of frames, default first frame
- Returns
Image with points projected
- Return type
Image
- apply_transform(world_transform: scale_lidar_io.transform.Transform)
Apply a Transformation to all the frames
- Parameters
world_transform (Transform) – Transform to apply to all the frames
- make_transforms_relative()
Make all the frame transforms relative to the first transform/frame. This will set the first transform to position (0,0,0) and heading (1,0,0,0)
- to_dict(base_url: Optional[str] = None) → dict
Return a dictionary with the frame urls using the base_url as base.
- Parameters
base_url (str) – This URL will be concatenated with the frame name, e.g.: ‘%s/frame-%s.json’ % (base_url, frame.id)
- Returns
Dictionary with the frame urls data
- Return type
dict
- s3_upload(bucket: str, path=None, mock_upload: float = False, use_threads: float = True)
Save scene in S3
- Parameters
bucket (str) – S3 Bucket name
path – Path to store data
mock_upload (bool) – Skip the actual upload of the data to S3 (default False)
use_threads (bool) – Upload multiple files concurrently using threads (default True)
- Returns
Scene S3 url
- Return type
str
- scale_file_upload(project_name: str, verbose: float = True)
Save scene in Scale file
- Parameters
project_name (str) – File project name
verbose (bool) – Set to False to hide the progress bar
- Returns
Scene file url
- Return type
str
- save_task(filepath: str, template=None)
Save the entire scene (with frames and images) in zipfile format to a local filepath
- Parameters
filepath (str) – File name and path in which the scene should be saved
- create_task(template: Optional[Dict] = None, task_type: scaleapi.tasks.TaskType = TaskType.LidarAnnotation) → scaleapi.tasks.Task
Create a Scale platform task from the configured scene
- Parameters
template (dict) – Dictionary of payload for task creation (https://private-docs.scale.com/?python#parameters), attachments data will be filled automatically.
task_type (TaskType) – Scale API endpoint to upload data to; currently supports ‘lidarannotation’, ‘lidarsegmentation’, and ‘lidartopdown’. Defaults to ‘lidarannotation’.
- Returns
Task object with related information. Inherited from scaleapi.Task object.
- Return type
Task
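A sketch of the upload-then-create flow; the bucket name and template fields below are placeholders:
from scaleapi.tasks import TaskType

# Host the scene data in S3, then create an annotation task from it
scene.s3_upload(bucket='my-lidar-bucket', path='scenes/scene-0001')
task = scene.create_task(
    template={
        'project': 'my-lidar-project',
        'instruction': 'Label all vehicles and pedestrians.',
    },
    task_type=TaskType.LidarAnnotation,
)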
scale_lidar_io.frame.LidarFrame
- class LidarFrame(frame_id, cameras)
Bases:
object
Frame object represents the point cloud, image and calibration data contained in a single frame
- Frame properties:
id: Frame id, used to identify the frame
cameras: List of LidarCamera
images: List of LidarImage
points: Pointcloud for this frame
radar_points: Radar points for this frame
colors: Colors for each point on the pointcloud for this frame
transform: Pose/transform of this frame
- get_image(camera_id) → scale_lidar_io.image.LidarImage
Get image by camera_id or create one if it does not exist
- Parameters
camera_id (str, int) – Camera id
- Returns
LidarImage object
- Return type
LidarImage
- add_points_from_connector(connector: scale_lidar_io.connectors.Importer, transform: Optional[scale_lidar_io.transform.Transform] = None, intensity=1, sensor_id=0)
Use Importer output to add points to the frame
- Parameters
connector (Importer) – Importer used to load the points
transform (Transform) – Transform that should be applied to the points
intensity (int) – If the points list does not include intensity, this value will be used as the intensity for all the points (default 1)
sensor_id (int) – Sensor id, used in case you have more than one lidar sensor (default 0)
- add_radar_points(points: numpy.array)
Add radar points to the frame, structure:
radar_points = np.array([
    [
        [0.30694541, 0.27853175, 0.51152715],  # position - x, y, z
        [0.80424087, 0.24164057, 0.45256181],  # direction - x, y, z
        [0.73596422],                          # size
    ],
    ...
])
- Parameters
points (np.array) – List of radar points data
- add_points(points: numpy.array, transform: Optional[scale_lidar_io.transform.Transform] = None, intensity=1, sensor_id=0)
Add points to the frame, structure: np.array with shape (N,3) or (N,4) (N being the number of points in the frame)
Points with intensity:
points = np.array([
    [0.30694541, 0.27853175, 0.51152715, 0.4],
    [0.80424087, 0.24164057, 0.45256181, 1],
    ...
])
Points without intensity:
points = np.array([
    [0.30694541, 0.27853175, 0.51152715],
    [0.80424087, 0.24164057, 0.45256181],
    ...
])
- Parameters
points (np.array) – List of points
transform (Transform) – Transform that should be applied to the points
intensity (int) – If the points list doesn’t include intensity, this value will be used as the intensity for all the points (default 1)
sensor_id (int) – Sensor id, used in case you have more than one lidar sensor (default 0)
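For example, adding an (N,4) points array with a hypothetical device-to-world transform:
import numpy as np
from scale_lidar_io.transform import Transform

points = np.array([
    [0.30694541, 0.27853175, 0.51152715, 0.4],   # x, y, z, intensity
    [0.80424087, 0.24164057, 0.45256181, 1.0],
])
# Optionally transform the points before they are stored in the frame
frame.add_points(points, transform=Transform.from_euler([0, 0, 90], degrees=True))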
- add_colors(colors: numpy.ndarray)
Add colors to the pointcloud. This list should follow the same order as the point list.
Each color should be in RGB with values between 0 and 255.
colors = np.array([
    [10, 200, 230],
    [0, 0, 255],
    ...
])
- Parameters
colors (np.ndarray) – List of colors
- add_debug_lines(intensity: int = 1, length: int = 5, device: int = 0)
Add debug lines.
This will add a line starting from each camera position to the direction it is facing. This will use the camera position in this frame.
- Parameters
intensity (int) – Intensity of the points of the debug line, default 1
length (int) – Length of the line, default 5 points
device (int) – Device id for the added points, default 0
- get_world_points()
Return the list of points with the frame transformation applied
- Returns
List of points in world coordinates
- Return type
np.array
- get_projected_image(camera_id, color_mode: str = 'default', **kwargs)
Get camera_id image with projected points
- Parameters
camera_id (str, int) – Camera id/Name/Identifier
color_mode (str) – Color mode, default ‘default’; modes are: ‘depth’, ‘intensity’ and ‘default’
- Returns
Image with the points projected
- Return type
PIL.Image
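For example, a quick visual check of the calibration (the output path is a placeholder):
img = frame.get_projected_image(camera_id=0, color_mode='depth')
img.save('projection-check.png')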
- manual_calibration(camera_id, intrinsics_ratio: int = 1000, extrinsics_ratio: int = 10)
Open a window showing the camera image with the points projected over it. The window also displays dials to change the camera intrinsic and extrinsic values. The new values for the camera calibration will be displayed as matrices on the terminal.
- Parameters
camera_id (str, int) – Camera id/Name/Identifier
intrinsics_ratio (int) – Range of possible values for the intrinsics; the center value will be the current one.
extrinsics_ratio (int) – Range of possible values for the extrinsics; the center value will be the current one.
- get_filename() → str
Get the frame JSON file name
- Returns
JSON file name
- Return type
str
- apply_transform(T: scale_lidar_io.transform.Transform)
Apply the frame transformation. This will be used to define the device position, and will be applied to cameras and points.
- Parameters
T (Transform) – Transform for this frame
- filter_points(min_intensity=None, min_intensity_percentile=None)
Filter points based on their intensity
- Parameters
min_intensity (int) – Minimum intensity allowed
min_intensity_percentile (int) – Minimum percentile allowed (uses np.percentile)
- to_json(base_url: str = '', s3_upload: bool = True, project_name: str = '')
Return the frame data in json format following Scale data format: https://private-docs.scale.com/?python#sensor-fusion-lidar-annotatio.
This will return the final data from the frame; this means cameras and points will be in world coordinates.
- Parameters
base_url (str) – This URL will be concatenated with the image name, e.g.: ‘%s/image-%s-%s.jpg’ % (base_url, camera.id, frame.id)
- Returns
Frame object as a JSON formatted stream
- Return type
str
- save(path: str, base_url: str = '')
Save frame object in a JSON file
- Parameters
path (str) – Path in which the frame data should be saved
base_url (str) – This URL will be concatenated with the image name, e.g.: ‘%s/image-%s-%s.jpg’ % (base_url, camera.id, frame.id)
- s3_upload(bucket: str, path: str)
Save frame in S3
- Parameters
bucket (str) – S3 Bucket name
path – Path to store data
- scale_file_upload(project_name: str)
Save frame in Scale File
- Parameters
project_name – File project name
scale_lidar_io.camera.LidarCamera
- class LidarCamera(camera_id)
Bases:
object
Camera object that contains all the camera information
- Camera properties:
id: Camera id/name/identifier, type: int, str
pose: Camera pose/extrinsic, type: Transform
world_poses: World poses; these will make the camera ignore the frame poses, type: list(Transform)
K: Intrinsic matrix
D: Camera distortion coefficients [k1,k2,p1,p2,k3,k4], default all set to 0
model: Camera model, default brown_conrady
scale_factor: Camera scale factor, default 1
skew: Camera skew coefficient, default 0
Useful extra documentation to better understand how this object works: https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html
- world2cam = R=[-90. 0. -90.] t=[0. 0. 0.]
- property fx
Camera X focal length
- Getter
Return camera’s X focal length
- Type
double
- property fy
Camera Y focal length
- Getter
Return camera’s Y focal length
- Type
double
- property cx
Camera X center point
- Getter
Return camera’s X center point
- Type
double
- property cy
Camera Y center point
- Getter
Return camera’s Y center point
- Type
double
- property intrinsic_matrix
Camera intrinsic/K
- Getter
Return camera’s intrinsic matrix
- Type
3x3 matrix
- property position: numpy.ndarray
Camera position
- Getter
Return camera’s position
- Setter
Set camera’s position
- Type
list(x,y,z)
- property rotation: numpy.ndarray
Camera rotation/heading
- Getter
Return camera’s rotation
- Setter
Set camera’s rotation
- Type
3x3 rotation matrix
- property world_transform: scale_lidar_io.transform.Transform
World transform/pose (used to ignore the frame pose)
- Getter
pose @ world2cam.T
- Setter
pose = transform @ world2cam
- Type
Transform
- property extrinsic_matrix
Camera extrinsic
- Getter
Return camera’s extrinsic matrix (pose.inverse[:3, :4])
- Setter
pose = Transform(matrix).inverse
- Type
3x4 matrix
- property projection_matrix
Projection matrix
- Getter
K @ extrinsic_matrix
- Setter
K, R, t, _, _, _, _ = cv2.decomposeProjectionMatrix(projection_matrix)
- Type
3x4 projection matrix
- calibrate(position=None, rotation=None, pose=None, extrinsic_matrix=None, projection_matrix=None, K=None, D=None, model=None, scale_factor=None, skew=None, world_transform=None, world_poses=None, **kwargs)
Helper for camera calibration
- Parameters
position (list(int)) – Camera position [x, y, z]
rotation (rotation matrix) – Camera rotation/heading
pose (Transform) – Camera pose (position + rotation)
extrinsic_matrix (matrix 4x4) – Extrinsic 4x4 matrix (world to camera transform) (pose = Transform(matrix).inverse)
projection_matrix (matrix 3x4) – 3x4 projection matrix (K, R, t, _, _, _, _ = cv2.decomposeProjectionMatrix(projection_matrix))
K (matrix 3x3) – Intrinsic 3x3 matrix
D (list(double)) – Distortion values in the order [k1,k2,p1,p2,k3,k4,k5,k6]; at least [k1,k2,p1,p2,k3,k4] are required
model (str) – Camera model
scale_factor (int) – Image scale_factor
skew (int) – Camera skew coefficient
world_transform (Transform) – Overwrite camera pose with the world transform (pose = transform @ world2cam)
world_poses (list(Transform)) – World poses, this will make the camera ignore the frame poses
- Keyword Arguments
fx (double) – Focal length in X
fy (double) – Focal length in Y
cx (double) – Center point in X
cy (double) – Center point in Y
k1 (double) – Distortion value k1
k2 (double) – Distortion value k2
k3 (double) – Distortion value k3
k4 (double) – Distortion value k4
k5 (double) – Distortion value k5
k6 (double) – Distortion value k6
p1 (double) – Distortion value p1
p2 (double) – Distortion value p2
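A calibration sketch; all numeric values below are made-up placeholders:
import numpy as np

camera.calibrate(
    position=[1.2, 0.0, 1.6],          # x, y, z in device coordinates
    rotation=np.eye(3),                # 3x3 rotation matrix
    K=np.array([
        [1000.0,    0.0, 960.0],
        [   0.0, 1000.0, 540.0],
        [   0.0,    0.0,   1.0],
    ]),
    D=[0, 0, 0, 0, 0, 0],              # [k1, k2, p1, p2, k3, k4]
    model='brown_conrady',
    scale_factor=1,
    skew=0,
)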
- apply_transform(transform: scale_lidar_io.transform.Transform)
Apply transformation to the camera (transform @ pose)
- Parameters
transform (Transform) – Transform to apply to the object
- rotate(angles, degrees=True)
Rotate the camera, (pose = Transform.from_euler(angles, degrees=degrees) @ pose)
- Parameters
angles (list(float)) – Angles to rotate (x,y,z)
degrees (boolean) – Use degrees or radians
- translate(vector)
Move the camera (pose = Transform(vector) @ pose)
- Parameters
vector (list(float)) – [x,y,z]
- project_points(points: numpy.ndarray, use_distortion=False)
Return array of projected points based on camera calibration values
When use_distortion=True it uses: cv.fisheye.projectPoints(objectPoints, rvec, tvec, K, D[, imagePoints[, alpha[, jacobian]]])
- Parameters
points (list(float)) – list of points
use_distortion (boolean) – For fisheye/omni cameras (not necessary for cameras like Brown-Conrady)
- get_projected_image(image, points, frame_transform, color_mode='default', oversample=3)
Return image with points projected onto it
- Parameters
image (PIL.Image) – Camera image
points (list(float)) – list of points/pointcloud
frame_transform (Transform) – Frame transform/pose
color_mode (str) – Color mode, default ‘default’; modes are: ‘depth’, ‘intensity’ and ‘default’
oversample (int) – Padding on projected points, used to project points outside the image; useful for debugging. Default 3 (3 times the image size)
- Returns
Image with points projected
- Return type
PIL.Image
scale_lidar_io.image.LidarImage
- class LidarImage(camera)
Bases:
object
LidarImage objects represent an image with a LidarCamera reference.
- LidarImage properties:
camera: Camera id
image_path: Image path
transform: Transformation applied to LidarImage (will be used as: (LidarImage.transform or LidarFrame.transform) @ camera.pose)
metadata: Metadata related to the image
timestamp: Timestamp
- load_file(file: str)
Set LidarImage image_path (Legacy method)
- Parameters
file (str) – Set image path
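For example, attaching an image file (the path is a placeholder) to a frame/camera pair:
image = frame.get_image(camera_id=0)   # creates the LidarImage if needed
image.load_file('data/camera-0-frame-0.jpg')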
- save_pil_image(pil_image: PIL.Image.Image)
Save image in image_path
- Parameters
pil_image (PIL.Image) – Image to save
- get_image() → PIL.Image.Image
Open LidarImage
- Returns
Image opened with PIL.Image.open
- as_array() → numpy.ndarray
Get the image as numpy array
- Returns
image as numpy array
- Return type
np.ndarray
- set_scale(scale_factor: float)
Change image scale and save in image_path
- Parameters
scale_factor (float) – Scale factor
- set_brightness(factor: float)
Change image brightness and save in image_path (will use PIL.ImageEnhance.Brightness)
- Parameters
factor – Brightness factor
- save(target_file: str)
Save image in target_file path
- Parameters
target_file (str) – Path in which the image should be saved
- s3_upload(bucket: str, key: str)
Save image in S3
- Parameters
bucket (str) – S3 Bucket name
key (str) – file name
- scale_file_upload(project_name: str)
Save image in Scale File
- Parameters
project_name – File project name
scale_lidar_io.transform.Transform
- class Transform(value=None)
Bases:
object
Transform object representing a rigid transformation matrix (rotation and translation).
Transform is a 4x4 matrix, although it can be instanced using (16,1), (3,4), (3,3) or (3,1) matrices. Note: not all Transform methods will work with scaled/smaller matrices
[[r00, r01, r02, t0],
 [r10, r11, r12, t1],
 [r20, r21, r22, t2],
 [  0,   0,   0,  1]]
- static from_Rt(R, t)
Create a transform based on rotation and translation components.
- Parameters
R (Quaternion, list) – Rotation matrix or quaternion.
t (list) – Translation component
- Returns
Transform created based on the components
- Return type
Transform
- static from_euler(angles, axes='sxyz', degrees=False)
Create a transform from euler angles
- Parameters
angles (list) – Values of the rotation per axis
axes (str) – Order of the axes (default sxyz)
degrees (boolean) – Use degrees or radians (default False = radians)
- Returns
Transform created from euler angles
- Return type
Transform
- static from_transformed_points(A, B)
Create a transform from two corresponding sets of points
- Parameters
A (list) – Point A (x,y,z)
B (list) – Point B (x,y,z)
- Returns
Transform created from the points
- Return type
Transform
- static random()
Create a transform from random rotation and translation
- Returns
Randomly generated transform
- Return type
Transform
- property quaternion
Transform rotation as quaternion
- Getter
Return transform’s rotation as quaternion
- Type
Quaternion
- property rotation
Transform rotation
- Getter
Return transform’s rotation
- Setter
Set transform rotation; accepts a 3x3 matrix or a Quaternion
- Type
3x3 matrix
- property position
Transform position/translation
- Getter
Return transform’s position
- Setter
Set transform’s position list(3x1)
- Type
list
- property translation
Transform position/translation
- Getter
Return transform’s position
- Setter
Set transform’s position list(3x1)
- Type
list
- property euler_angles
Transform rotation in euler angles
- Getter
Return transform’s rotation in euler angles
- Type
list
- property euler_degrees
Transform rotation in euler degrees
- Getter
Return transform’s rotation in euler degrees
- Type
list
- apply(points)
Apply transform to a list of points
- Parameters
points – List of points (N,3) or (N,4)
- Returns
List of points with the transform applied
- Return type
list
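A short sketch of composing and applying a transform (values are arbitrary):
import numpy as np
from scale_lidar_io.transform import Transform

# Compose a pose from a rotation matrix and a translation vector
pose = Transform.from_Rt(R=np.eye(3), t=[1.0, 0.0, 0.0])
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 2.0, 3.0],
])
shifted = pose.apply(points)   # every point translated by +1 on x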
scale_lidar_io.connectors.Importer
- class Importer
Bases:
object
Points importer/helper
- class Base(fp: str)
Bases:
object
Abstract importer class to be inherited for data type specific implementation
Constructor to be called for preparing filepaths
- Parameters
fp (str) – Relative or absolute path to file to be loaded in explicit load method
- load(**kwargs) → None
- property data
- class CSV(fp: str)
Bases:
scale_lidar_io.connectors.Importer.Base
Base class for specific CSV Importer implementations. Due to non-standardized csv formats, it is recommended to write more specific implementations on a case-by-case basis.
Constructor to be called for preparing filepaths
- Parameters
fp (str) – Relative or absolute path to file to be loaded in explicit load method
- class OrderedCSV(fp: str)
Bases:
scale_lidar_io.connectors.Importer.CSV
Expects a csv file with or without a header, but assumes the correct order of columns/values as follows: [x, y, z, i, d]. Any exceeding columns are cut off.
Constructor to be called for preparing filepaths
- Parameters
fp (str) – Relative or absolute path to file to be loaded in explicit load method
- class NamedCSV(fp: str)
Bases:
scale_lidar_io.connectors.Importer.CSV
Expects a csv file with a header row and column names matching x, y, z, i, d. Dismisses any columns not matching the expected names. Case-sensitive.
Constructor to be called for preparing filepaths
- Parameters
fp (str) – Relative or absolute path to file to be loaded in explicit load method
- class PCD(fp: str)
Bases:
scale_lidar_io.connectors.Importer.Base
Uses open3d library to read pcd files, expects [x, y, z], dismisses other columns.
Constructor to be called for preparing filepaths
- Parameters
fp (str) – Relative or absolute path to file to be loaded in explicit load method
- class LAS(fp: str)
Bases:
scale_lidar_io.connectors.Importer.Base
Uses Laspy library to read .las file and expects properties x, y, z, intensity to be present.
Constructor to be called for preparing filepaths
- Parameters
fp (str) – Relative or absolute path to file to be loaded in explicit load method
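A sketch of loading points through a connector (the file path is a placeholder):
from scale_lidar_io.connectors import Importer

pcd = Importer.PCD('data/frame-0.pcd')
pcd.load()                                  # reads x, y, z from the file
frame.add_points_from_connector(pcd, intensity=1, sensor_id=0)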
Tasks
- class LidarAnnotationTask(param_dict, client)
Bases:
scaleapi.tasks.Task
Lidar annotation Task object
- scene: scale_lidar_io.scene.LidarScene = None
- static from_scene(scene: scale_lidar_io.scene.LidarScene, template=None, client=None)
Load scene data and convert it into a LidarAnnotation Task format
- Parameters
scene (LidarScene) – Scene to load
template (dict) – Template/payload to send to the Scale API
client (scaleapi.ScaleClient) – ScaleClient object; by default it will load your SCALE_API_KEY from your env vars and set up a client automatically.
- Returns
LidarAnnotationTask object
- Return type
LidarAnnotationTask
- static from_id(task_id: str)
Get LidarAnnotation task from a task id
- Parameters
task_id (str) – Task id
- Returns
LidarAnnotationTask object created based on the task id data
- Return type
LidarAnnotationTask
- get_annotations()
Get annotations/response from a completed LidarAnnotation task
- Returns
Annotations
- Return type
dict
- get_cuboid_positions_by_frame()
Get a list of each cuboid’s position in each frame (from a completed task)
- Returns
List of cuboids positions
- Return type
list
- publish(task_type: scaleapi.tasks.TaskType = TaskType.LidarAnnotation)
Publish/create a task, requesting the Scale API with the LidarAnnotation data
- Parameters
task_type (scaleapi.tasks.TaskType) – Task type to create, default lidarannotation
- Returns
Task object created from the response of the API call
- Return type
scaleapi.tasks.Task
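A publish-and-fetch sketch, assuming the task classes are importable from the package (the exact module path is not shown above) and that SCALE_API_KEY is set in your environment:
from scaleapi.tasks import TaskType

task = LidarAnnotationTask.from_scene(scene, template={'project': 'my-lidar-project'})
created = task.publish(task_type=TaskType.LidarAnnotation)

# Later, once the task has been completed:
completed = LidarAnnotationTask.from_id(created.id)
annotations = completed.get_annotations()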
- class LidarTopDownTask(param_dict, client)
Bases:
scaleapi.tasks.Task
Lidar top-down Task object
- scene: scale_lidar_io.scene.LidarScene = None
- static from_scene(scene: scale_lidar_io.scene.LidarScene, template=None, client=None)
Load scene data and convert it into a LidarTopDown Task format
- Parameters
scene (LidarScene) – Scene to load
template (dict) – Template/payload to send to the Scale API
client (scaleapi.ScaleClient) – ScaleClient object; by default it will load your SCALE_API_KEY from your env vars and set up a client automatically.
- Returns
LidarTopDownTask object
- Return type
LidarTopDownTask
- static from_id(task_id: str)
Get LidarTopDown task from a task id
- Parameters
task_id (str) – Task id
- Returns
LidarTopDownTask object created based on the task id data
- Return type
LidarTopDownTask
- publish(task_type: scaleapi.tasks.TaskType = TaskType.LidarTopdown)
Publish/create a task, requesting the Scale API with the LidarTopDown data
- Parameters
task_type (scaleapi.tasks.TaskType) – Task type to create, default lidartopdown
- Returns
Task object created from the response of the API call
- Return type
scaleapi.tasks.Task
- class LidarSegmentationTask(param_dict, client)
Bases:
scaleapi.tasks.Task
Lidar segmentation Task object
- scene: scale_lidar_io.scene.LidarScene = None
- static from_scene(scene: scale_lidar_io.scene.LidarScene, template=None, client=None)
Load scene data and convert it into a LidarSegmentation Task format
- Parameters
scene (LidarScene) – Scene to load
template (dict) – Template/payload to send to the Scale API
client (scaleapi.ScaleClient) – ScaleClient object; by default it will load your SCALE_API_KEY from your env vars and set up a client automatically.
- Returns
LidarSegmentationTask object
- Return type
LidarSegmentationTask
- static from_id(task_id: str)
Get LidarSegmentation task from a task id
- Parameters
task_id (str) – Task id
- Returns
LidarSegmentationTask object created based on the task id data
- Return type
LidarSegmentationTask
- publish(task_type: scaleapi.tasks.TaskType = TaskType.LidarSegmentation)
Publish/create a task, requesting the Scale API with the LidarSegmentation data
- Parameters
task_type (scaleapi.tasks.TaskType) – Task type to create, default lidarsegmentation
- Returns
Task object created from the response of the API call
- Return type
scaleapi.tasks.Task