AIPHAProcessing
- class AIPHAProcessing.AIPHAProcessing(client, processing_folder='processing', worker_instance_type='P2', manager_instance_type='small')
Bases: object
AIPHAProcessing class for data processing with AIPHA API.
- create_operator_from_path(path, extension='', is_folder_level='__auto__')
Create an “operator” pointing to a path.
- Parameters:
path – path to file or folder
extension – extension of file
is_folder_level – if True, the operator is executed massively in parallel, once per file in the folder; if False, it is executed at file level; if '__auto__', folder level is used when the path is a folder and file level otherwise
- execute()
Execute the call stack.
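Calls on the namespaces below are recorded on a call stack and only run remotely once execute() is called. A minimal usage sketch (the client construction and the import path are assumptions, as is passing the operator as an input path; folder names are illustrative):

```python
# Sketch only: `client` is assumed to be an already-authenticated AIPHA API
# client, and the import path is assumed as well.
from aipha.processing import AIPHAProcessing  # import path assumed

processing = AIPHAProcessing(client, processing_folder='processing')

# Point an operator at a folder of LAZ files; with the default
# is_folder_level='__auto__' a folder path is processed once per file,
# massively in parallel.
clouds = processing.create_operator_from_path('clouds/', extension='.laz')

# Namespaced calls (fvo, image, ml3d, ops3d, ...) are pushed onto the
# call stack; passing the operator as an input path is an assumption here.
processing.ops3d.uniform_down_sampling_voxel(input_path=clouds,
                                             output_path='downsampled/',
                                             voxel_size=0.05)

processing.execute()  # run the queued call stack remotely
```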
- class fvo(outer_class)
Bases: object
Namespace for fvo functions.
- align_top(input_path='__auto__', target_path='__auto__', output_path='__auto__', extension_input_file='.laz', extension_target_file='.laz', extension_output_file='.laz', folder_parallel_processing='__auto__')
Align an input point cloud to the top of a target point cloud
- Parameters:
input_path – input LAZ folder
target_path – target LAZ folder
output_path – output LAZ folder
- connect_neighbouring_vertices_unassigned(input_path='__auto__', input_vertices='vertices.laz', output_path='__auto__', max_line_distance=0.5, max_line_distance_corner=0.23, min_samples=3, extension_input_file='.laz', extension_output_file='.dxf', folder_parallel_processing='__auto__')
Connect neighbouring vertices in a point cloud
- Parameters:
input_path – Input folder with all 3D points
input_vertices – Input vertices
output_path – Output model
max_line_distance – Maximum distance between line and points to be considered as inlier
max_line_distance_corner – Maximum distance between vertices and points to be considered as inlier
min_samples – Minimum number of points to fit a line
- estimate_vobject_coordinates(path_source_in='__auto__', path_trafo_out='__auto__', path_source_out='__auto__', extension_file_source_in='.laz', extension_file_trafo_out='.txt', extension_file_source_out='.laz', folder_parallel_processing='__auto__')
estimate vobject coordinates
- Parameters:
path_source_in – input data folder
path_trafo_out – output transformation folder
path_source_out – output data folder
- evaluate_model(data_is='is.laz', model_is='is.dxf', model_target='__auto__', vertex_target_distance=0.5, extension_model_target='.dxf', folder_parallel_processing='__auto__')
Compare two 3D models
- Parameters:
data_is – input point cloud file
model_is – input model file
model_target – output folder
vertex_target_distance – Distance threshold for corner point suppression.
- export_vertices(in_path='__auto__', out_path='__auto__', extension_in_file='.laz', extension_out_file='.dxf', folder_parallel_processing='__auto__')
Export a 3D model from a point cloud
- Parameters:
in_path – input folder with edges encoded in the intensity field.
out_path – output folder
- filter_valid_vertices(input_path='__auto__', input_path2='__auto__', input_path_features='__auto__', output_path='__auto__', output_path_features='__auto__', min_distance=0.0, max_distance=100, extension_input_file='.laz', extension_input_file2='.laz', extension_input_file_features='.npy', extension_output_file='.laz', extension_output_file_features='.npy', folder_parallel_processing='__auto__')
Filter valid vertices by distance to a reference point cloud.
- Parameters:
input_path – Input laz or txt folder to filter.
input_path2 – Input laz or txt folder as reference.
input_path_features – Input features folder.
output_path – Output laz or txt folder.
output_path_features – Output features folder.
min_distance – Minimum distance.
max_distance – Maximum distance.
- import_vertices(in_path='__auto__', layer='-1', out_path='__auto__', extension_in_file='.dxf', extension_out_file='.laz', folder_parallel_processing='__auto__')
Extract visible face3d vertices from a DXF file.
- Parameters:
in_path – Input DXF folder
layer – Layer names as comma-separated list
out_path – Output folder
- likelihood(input_path='__auto__', points_path='__auto__', output_path='__auto__', max_distance=0.5, missing_distance=1.5, missing_knn=2, extension_input_file='.laz', extension_points_file='.laz', extension_output_file='.laz', folder_parallel_processing='__auto__')
Compute class conditional probability distribution
- Parameters:
input_path – input LAZ folder
points_path – input points LAZ folder
output_path – output LAZ folder
max_distance – maximum distance for the probability computation
missing_distance – distance threshold for interpolating missing points
missing_knn – number of neighbours used to interpolate missing points
- model_swap_axis(input_path='__auto__', output_path='__auto__', extension_input_file='.dxf', extension_output_file='.dxf', folder_parallel_processing='__auto__')
model swap axis
- Parameters:
input_path – input DXF folder
output_path – output DXF folder
- optimize_model_graph(in_dxf_path='__auto__', in_point_cloud_path='__auto__', out_dxf_path='__auto__', max_distance=0.35, num_iterations=4, extension_in_dxf_file='.dxf', extension_in_point_cloud_file='.laz', extension_out_dxf_file='.dxf', folder_parallel_processing='__auto__')
Optimize the graph of a 3D model.
- Parameters:
in_dxf_path – input DXF folder
in_point_cloud_path – input point cloud folder
out_dxf_path – output DXF folder
max_distance – Maximum distance between a point and a graph node for the point to be considered a candidate for merging with the graph node.
num_iterations – Number of iterations to run the optimization.
- simplify_model(in_path='__auto__', out_path='__auto__', layers='1', distance=0.25, extension_in_file='.dxf', extension_out_file='.dxf', folder_parallel_processing='__auto__')
Simplify a 3D model
- Parameters:
in_path – input folder
out_path – output folder
layers – layers to be processed
distance – Distance threshold for corner point suppression.
- zero_centering(input_path='__auto__', output_path='__auto__', extension_input_file='.laz', extension_output_file='.laz', folder_parallel_processing='__auto__')
Zero centering of XYZ points in a LAZ file
- Parameters:
input_path – input LAZ folder
output_path – output LAZ folder
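Several fvo calls chain into a small model-extraction pipeline; a sketch under the assumption that each output folder feeds the next step (folder names are illustrative, processing is the AIPHAProcessing instance from above):

```python
# Center the raw clouds, export a 3D model, then simplify it.
processing.fvo.zero_centering(input_path='clouds/', output_path='centered/')
# export_vertices expects edges encoded in the intensity field (see above).
processing.fvo.export_vertices(in_path='centered/', out_path='models/')
processing.fvo.simplify_model(in_path='models/', out_path='simplified/',
                              layers='1', distance=0.25)
processing.execute()
```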
- class image(outer_class)
Bases: object
Namespace for image functions.
- assign_georeference(georeferenced_path='__auto__', unreferenced_path='__auto__', output_path='__auto__', extension_georeferenced_file='.tif', extension_unreferenced_file='.tif', extension_output_file='.tif', folder_parallel_processing='__auto__')
Assign georeference from a georeferenced image to an unreferenced image.
- Parameters:
georeferenced_path – Georeferenced image folder
unreferenced_path – Unreferenced image folder
output_path – Output georeferenced image folder
- canny_edge_detection(input_path='__auto__', output_path='__auto__', sigma=1.0, low_threshold=0.1, high_threshold=0.2, values_subset='', extension_input_file='.tif', extension_output_file='.tif', folder_parallel_processing='__auto__')
Perform Canny edge detection on a georeferenced image and save the detected edges as a raster.
- Parameters:
input_path – Input image folder
output_path – Output edge raster folder
sigma – Standard deviation of the Gaussian filter
low_threshold – Low threshold for hysteresis
high_threshold – High threshold for hysteresis
values_subset – Subset of values to extract contours from, all values by default
- extract_contours(input_path='__auto__', value=0.5, output_path='__auto__', values_subset='', extension_input_file='.tif', extension_output_file='.shp', folder_parallel_processing='__auto__')
Extract contours from a georeferenced image and save them as a shapefile.
- Parameters:
input_path – Input georeferenced image folder
value – Contour value
output_path – Output shapefile folder
values_subset – Subset of contour values, default is all values
- image_metadata(input_path='__auto__', output_path='__auto__', extension_input_file='.tif', extension_output_file='.json', folder_parallel_processing='__auto__')
Obtain metadata of a georeferenced image and save it as a JSON file.
- Parameters:
input_path – Input georeferenced image folder
output_path – Output JSON folder
- image_to_matrix(input_path='__auto__', output_path='__auto__', extension_input_file='.tif', extension_output_file='.npy', folder_parallel_processing='__auto__')
Convert an image to a matrix.
- Parameters:
input_path – Input image folder
output_path – Output matrix folder (either .npy or .txt)
- matrix_to_image(input_path='__auto__', output_path='__auto__', data_type='uint8', extension_input_file='.npy', extension_output_file='.tif', folder_parallel_processing='__auto__')
Convert a matrix to an image.
- Parameters:
input_path – Input matrix folder (either .npy or .txt)
output_path – Output image folder
data_type – Data type of the output image
- polygon_to_image(geotiff_path='__auto__', pickle_path='__auto__', output_path='__auto__', extension_geotiff_file='.tif', extension_pickle_file='.pickle', extension_output_file='.tif', folder_parallel_processing='__auto__')
Generate an image of a multipolygon filled inside.
- Parameters:
geotiff_path – Geotiff folder with size and resolution information
pickle_path – Pickle folder containing the Shapely polygon
output_path – Output georeferenced TIFF image
- resize_image(input_path='__auto__', output_path='__auto__', new_grid_size=1.0, compression='None', extension_input_file='.tif', extension_output_file='.tif', folder_parallel_processing='__auto__')
resize image
- Parameters:
input_path – Input georeferenced image folder
output_path – Output georeferenced image folder
new_grid_size – New grid size in meters
compression – Compression method (e.g., deflate, lzw)
- retile_images(path_reference='.', path_to_retile='.', output_path='out1', extension_ref='.tif', extension_ret='.tif', folder_parallel_processing='__auto__')
retile images
- Parameters:
path_reference – Reference folder with image dimensions and geolocations that should be used for retiling
path_to_retile – Folder with images that should be retiled to match reference
output_path – Folder with retiled images
extension_ref – file extension of the reference images
extension_ret – file extension of the images to retile
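For instance, retiling orthophotos to a reference grid and then extracting contours from the retiled images might look like the following sketch (folder names are illustrative):

```python
# Retile the orthophotos so their tiling matches the reference images ...
processing.image.retile_images(path_reference='reference_tiles/',
                               path_to_retile='orthophotos/',
                               output_path='retiled/')
# ... then extract contours at a fixed value as shapefiles.
processing.image.extract_contours(input_path='retiled/', value=0.5,
                                  output_path='contours/')
processing.execute()
```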
- class ml3d(outer_class)
Bases: object
Namespace for ml3d functions.
- evaluate_semantic_segmentation(prediction_path='__auto__', ground_truth_path='__auto__', class_names='1,2,3,4', invalid_label=0, extension_prediction_path='.labels', extension_ground_truth_path='.labels', folder_parallel_processing='__auto__')
Evaluate semantic segmentation
- Parameters:
prediction_path – Path to prediction file or folder
ground_truth_path – Path to ground truth file or folder
class_names – class names
invalid_label – Invalid label
- knn_classification(in_path_to_points='__auto__', in_path_from_points='__auto__', out_path_labels='__auto__', out_path_probs='__auto__', k=3, max_distance=1.0, to_points_names='X,Y,Z', from_point_names='X,Y,Z', from_class_name='classification', extension_in_path_to_points='.laz', extension_in_path_from_points='.laz', extension_out_path_labels='.labels', extension_out_path_probs='.npy', folder_parallel_processing='__auto__')
knn classification
- Parameters:
in_path_to_points – input point cloud to be labeled
in_path_from_points – input reference point cloud
out_path_labels – out class labels
out_path_probs – out class probabilities
k – number of neighbors
max_distance – maximum distance
to_points_names – names of points to be labeled
from_point_names – names of reference points
from_class_name – name of reference classification
- semantic_inference_pt_v2m2(data_in_path='__auto__', in_model_parameters_path='trained_model/model_ptv2m2', out_label_path='__auto__', out_probability_path='__auto__', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='X,Y,Z', label_name='classification', resolution=0.05, number_of_votes=5, extension_data_in_path='.laz', extension_out_label_path='.labels', extension_out_probability_path='.npy', folder_parallel_processing='__auto__')
PT v2m2 Inference
- Parameters:
data_in_path – folder that contains the test data
in_model_parameters_path – path to model
out_label_path – folder that contains the resulting labels
out_probability_path – folder that contains the resulting probabilities
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
number_of_votes – number of votes
- semantic_inference_pt_v3m1(data_in_path='__auto__', in_model_parameters_path='trained_model/model_ptv2m2', out_label_path='__auto__', out_probability_path='__auto__', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='X,Y,Z', label_name='classification', resolution=0.05, number_of_votes=5, extension_data_in_path='.laz', extension_out_label_path='.labels', extension_out_probability_path='.npy', folder_parallel_processing='__auto__')
PT v3m1 Inference
- Parameters:
data_in_path – folder that contains the test data
in_model_parameters_path – path to model
out_label_path – folder that contains the resulting labels
out_probability_path – folder that contains the resulting probabilities
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
number_of_votes – number of votes
- semantic_inference_rfcr(data_in_path='__auto__', results_labels_path='__auto__', results_probabilities_path='__auto__', in_model_parameters_path='results/Log_2022-11-10_11-42-05', number_of_votes=5, feature_names='red,green,blue', point_names='x,y,z', extension_data_in_path='.laz', extension_results_labels_path='.labels', extension_results_probabilities_path='.npy', folder_parallel_processing='__auto__')
semantic inference rfcr
- Parameters:
data_in_path – input data folder
results_labels_path – output labels folder
results_probabilities_path – output probabilities folder
in_model_parameters_path – path to model
number_of_votes – number of votes per class
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
- semantic_inference_scf(data_in_path='__auto__', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='x,y,z', label_name='classification', feature_dimensions='12,48,96,192,384', batch_size=2, results_labels_path='__auto__', in_model_parameters_path='results/Log_2022-11-10_11-42-05', results_probabilities_path='__auto__', number_of_votes=5, extension_data_in_path='.laz', extension_results_labels_path='.labels', extension_results_probabilities_path='.npy', folder_parallel_processing='__auto__')
semantic inference scf
- Parameters:
data_in_path – folder that contains the input data
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
feature_dimensions – feature dimensions
batch_size – batch size
results_labels_path – output labels folder
in_model_parameters_path – path to model
results_probabilities_path – output probabilities folder
number_of_votes – number of votes per class
- semantic_inference_spunet(data_in_path='__auto__', in_model_parameters_path='trained_model/model_1', out_label_path='__auto__', out_probability_path='__auto__', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='X,Y,Z', label_name='classification', resolution=0.05, channels='32,64,128,256,256,128,96,96', layers='2,3,4,6,2,2,2,2', number_of_votes=5, extension_data_in_path='.laz', extension_out_label_path='.labels', extension_out_probability_path='.npy', folder_parallel_processing='__auto__')
Spunet Inference
- Parameters:
data_in_path – folder that contains the test data
in_model_parameters_path – path to model
out_label_path – folder that contains the resulting labels
out_probability_path – folder that contains the resulting probabilities
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
channels – comma separated list of channels
layers – comma separated list of layers
number_of_votes – number of votes
- semantic_training_pt_v2m2(data_in_path='__auto__', out_model_parameters_path='trained_model/model_ptv2m2', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='X,Y,Z', label_name='classification', resolution=0.05, max_epochs=500, learning_rate=0.01, batch_size=10, final_div_factor=100, div_factor=10, weight_decay=0.005, extension_data_in_path='', folder_parallel_processing='__auto__')
Pt v2m2 Training
- Parameters:
data_in_path – folder that contains the training data
out_model_parameters_path – path to model
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
max_epochs – maximum number of epochs
learning_rate – learning rate
batch_size – batch size
final_div_factor – final div factor for learning rate
div_factor – div factor for learning rate
weight_decay – weight decay
- semantic_training_pt_v3m1(data_in_path='__auto__', out_model_parameters_path='trained_model/model_ptv2m2', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='X,Y,Z', label_name='classification', resolution=0.05, max_epochs=500, learning_rate=0.01, batch_size=10, final_div_factor=100, div_factor=10, weight_decay=0.005, extension_data_in_path='', folder_parallel_processing='__auto__')
Pt v3m1 Training
- Parameters:
data_in_path – folder that contains the training data
out_model_parameters_path – path to model
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
max_epochs – maximum number of epochs
learning_rate – learning rate
batch_size – batch size
final_div_factor – final div factor for learning rate
div_factor – div factor for learning rate
weight_decay – weight decay
- semantic_training_rfcr(data_in_path='__auto__', out_model_parameters_path='trained_model/model_1', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='x,y,z', label_name='classification', resolution=0.06, max_epochs=500, learning_rate=0.01, batch_size=10, learning_rate_decay=0.1, learning_momentum=0.98, learning_gradient_clip_norm=100, first_features_dim=128, extension_data_in_path='', folder_parallel_processing='__auto__')
semantic training rfcr
- Parameters:
data_in_path – folder that contains the training data
out_model_parameters_path – path to model
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
max_epochs – maximum number of epochs
learning_rate – learning rate
batch_size – batch size
learning_rate_decay – learning rate decay
learning_momentum – learning momentum
learning_gradient_clip_norm – learning gradient clip threshold
first_features_dim – first features dimension
- semantic_training_scf(data_in_path='__auto__', out_model_parameters_path='trained_model/model_1', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='x,y,z', label_name='classification', max_epochs=500, learning_rate=0.01, learning_rate_decay=0.95, feature_dimensions='16,64,128,256,512', batch_size=2, extension_data_in_path='', folder_parallel_processing='__auto__')
semantic training scf
- Parameters:
data_in_path – folder that contains the training data
out_model_parameters_path – path to model
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
max_epochs – maximum number of epochs
learning_rate – learning rate
learning_rate_decay – learning rate decay
feature_dimensions – feature dimensions
batch_size – batch size
- semantic_training_spunet(data_in_path='__auto__', out_model_parameters_path='trained_model/model_1', class_names='1,2,3,4,5,6,7,8', feature_names='red,green,blue', point_names='X,Y,Z', label_name='classification', resolution=0.05, max_epochs=500, learning_rate=0.01, batch_size=10, final_div_factor=100, div_factor=10, weight_decay=0.005, channels='32,64,128,256,256,128,96,96', layers='2,3,4,6,2,2,2,2', extension_data_in_path='', folder_parallel_processing='__auto__')
Spunet Training
- Parameters:
data_in_path – folder that contains the training data
out_model_parameters_path – path to model
class_names – comma separated list of class names. Class 0 is always given and is used to denote unlabeled points.
feature_names – comma separated list of features that are provided
point_names – comma separated list of point identifiers in (las/laz)
label_name – label name for (las/laz)
resolution – resolution of the subsampled point cloud
max_epochs – maximum number of epochs
learning_rate – learning rate
batch_size – batch size
final_div_factor – final div factor for learning rate
div_factor – div factor for learning rate
weight_decay – weight decay
channels – comma separated list of channels
layers – comma separated list of layers
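Training and inference share the class, feature, and point configuration. A sketch pairing SpUNet training with the matching inference call (folder names and the four-class setup are illustrative):

```python
# Train SpUNet on labeled LAZ tiles ...
processing.ml3d.semantic_training_spunet(
    data_in_path='data_train/',
    out_model_parameters_path='trained_model/model_1',
    class_names='1,2,3,4',
    feature_names='red,green,blue',
    resolution=0.05)
# ... then predict labels and per-class probabilities on unseen tiles,
# reusing the same class and feature configuration.
processing.ml3d.semantic_inference_spunet(
    data_in_path='data_test/',
    in_model_parameters_path='trained_model/model_1',
    out_label_path='predictions/',
    out_probability_path='probabilities/',
    class_names='1,2,3,4',
    feature_names='red,green,blue')
processing.execute()
```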
- universal_inference(in_paths='__auto__', out_paths='__auto__', in_model_path='parameters_model', extension_in_files='', extension_out_files='', folder_parallel_processing='__auto__')
universal inference
- Parameters:
in_paths – input folders with data
out_paths – output folders for the results
in_model_path – model path
- universal_training(in_path='data_train', out_model_path='parameters_model_test', voxel_size=0.02, zero_centering='True', point_names='X,Y,Z', feature_names='', label_names='classification', num_classes=1, label_scales='0.01', learning_rate=3e-06, learning_decay=0.9999, num_epochs=200000, regularization_decay=1e-09, batch_size=2, save_after_epochs=100, backbone_type='MinkUNet14A', head_type='HeadPointwise', criterion_type='L1Sum', probabilistic='True', hidden_layers=8, store_in_memory='True', folder_parallel_processing='__auto__')
universal training
- Parameters:
in_path – input directory with training data
out_model_path – model path
voxel_size – voxel size
zero_centering – zero centering
point_names – point names
feature_names – feature names
label_names – label names
num_classes – number of classes
label_scales – label scales
learning_rate – learning rate
learning_decay – learning rate decay
num_epochs – number of epochs
regularization_decay – regularization decay
batch_size – batch size for training
save_after_epochs – save the model after this many epochs
backbone_type – model type of backbone network
head_type – model type of head network
criterion_type – model type of criterion
probabilistic – estimate probabilities: labels in [0,1]
hidden_layers – number of hidden layers
store_in_memory – store training data in memory
- vertices_estimation_inference(in_paths='__auto__', out_paths='__auto__', in_model_path='parameters_model_test', batch_size=1, extension_in_files='', extension_out_files='', folder_parallel_processing='__auto__')
vertices estimation inference
- Parameters:
in_paths – input folders or directory with data
out_paths – output folders containing the vertices
in_model_path – model path
batch_size – batch size
- vertices_estimation_training(in_path='data_train', in_vertices_path='data_train_vertices', out_model_path='parameters_model_test', voxel_size=0.02, zero_centering='True', point_names='X,Y,Z', feature_names='', label_names='classification', num_classes=1, label_scales='0.01', learning_rate=1e-05, learning_decay=0.999, num_epochs=2000000, regularization_decay=1e-09, batch_size=5, save_after_epochs=100, backbone_type='MinkUNet14A', head_type_prob='HeadPointwise', criterion_type_prob='BCEMean', hidden_layers=8, max_interpolation_distance=0.75, dist_threshold=0.35, score_threshold=0.4, point_estimation_layers=3, point_estimation_channels=8, criterion_type_point='L1Mean', weight_pred=1.0, weight_prob=2.0, weight_reconstruction=4.0, probabilistic='True', folder_parallel_processing='__auto__')
vertices estimation training
- Parameters:
in_path – input directory with training data
in_vertices_path – input directory with corresponding vertex data
out_model_path – model path
voxel_size – voxel size
zero_centering – zero centering
point_names – point names
feature_names – feature names
label_names – label names
num_classes – number of classes
label_scales – label scales
learning_rate – learning rate
learning_decay – learning rate decay
num_epochs – number of epochs
regularization_decay – regularization decay
batch_size – batch size for training
save_after_epochs – save the model after this many epochs
backbone_type – model type of backbone network
head_type_prob – model type of head network
criterion_type_prob – model type of criterion
hidden_layers – number of hidden layers
max_interpolation_distance – maximum distance to interpolate occluded points
dist_threshold – distance threshold for non-maximum suppression
score_threshold – score threshold for non-maximum suppression
point_estimation_layers – number of hidden layers for point estimation
point_estimation_channels – number of channels for point estimation
criterion_type_point – model type of criterion for point estimation
weight_pred – weight for point estimation
weight_prob – weight for probability estimation
weight_reconstruction – weight for reconstruction estimation
probabilistic – estimate probabilities: labels in [0,1]
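A sketch of the vertex-estimation round trip: train on point clouds with matching vertex data, then predict vertices for new clouds (folder names are illustrative):

```python
# Train on point clouds and their corresponding vertices ...
processing.ml3d.vertices_estimation_training(
    in_path='data_train',
    in_vertices_path='data_train_vertices',
    out_model_path='parameters_model_test')
# ... then estimate vertices on unseen data with the trained model.
processing.ml3d.vertices_estimation_inference(
    in_paths='data_test/',
    out_paths='vertices_out/',
    in_model_path='parameters_model_test')
processing.execute()
```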
- wireframe_estimation_inference(in_paths='__auto__', out_result_paths='__auto__', in_model_path='parameters_wireframe', knn_line=15, mode_wireframe_estimation='knn unassigned', num_votes=10, rotation_axis='z', extension_in_files='', extension_out_result_files='', folder_parallel_processing='__auto__')
[hidden] Wireframe estimation inference
- Parameters:
in_paths – input folders or directory with training data
out_result_paths – output folders containing the wireframes
in_model_path – model path
knn_line – number of nearest neighbours for line estimation
mode_wireframe_estimation – mode for wireframe estimation
num_votes – number of votes for wireframe estimation
rotation_axis – rotation axis
- wireframe_estimation_training(in_path='data_train', in_wireframe_path='data_train_wireframe', out_model_path='parameters_wireframe_14A_bce_interpolation', voxel_size=0.02, zero_centering='True', point_names='X,Y,Z', feature_names='', label_names='classification', num_classes=1, label_scales='0.01', learning_rate=5e-06, learning_decay=0.999, num_epochs=2000000, regularization_decay=1e-10, batch_size=5, save_after_epochs=1, backbone_type='MinkUNet14A', head_type_prob='HeadPointwise', criterion_type_prob='BCEMean', hidden_layers=8, max_interpolation_distance=0.75, dist_threshold=0.35, score_threshold=0.5, point_estimation_layers=3, point_estimation_channels=32, criterion_type_point='L1Mean', wireframe_criterion_type='BCEMean', wireframe_estimation_layers=3, wireframe_estimation_channels=32, weight_pred=2, weight_prob=6.5, weight_reconstruction=4.5, weight_wireframe=9, knn_line=10, distance_line=0.3, probabilistic='True', store_in_memory='True', mode_wireframe_estimation='knn', maximum_wireframe_samples=2500, wireframe_subsampling=5, wireframe_extrapolation_sampling=2, only_train_wireframe='False', folder_parallel_processing='__auto__')
[hidden] wireframe estimation training
- Parameters:
in_path – input directory with training data
in_wireframe_path – input directory with corresponding wireframe data
out_model_path – model path
voxel_size – voxel size
zero_centering – zero centering
point_names – point names
feature_names – feature names
label_names – label names
num_classes – number of classes
label_scales – label scales
learning_rate – learning rate
learning_decay – learning rate decay
num_epochs – number of epochs
regularization_decay – regularization decay
batch_size – batch size for training
save_after_epochs – save the model after this many epochs
backbone_type – model type of backbone network
head_type_prob – model type of head network
criterion_type_prob – model type of criterion
hidden_layers – number of hidden layers
max_interpolation_distance – maximum distance to interpolate occluded points
dist_threshold – distance threshold for non-maximum suppression
score_threshold – score threshold for non-maximum suppression
point_estimation_layers – number of hidden layers for point estimation
point_estimation_channels – number of channels for point estimation
criterion_type_point – model type of criterion for point estimation
wireframe_criterion_type – model type of criterion for wireframe estimation
wireframe_estimation_layers – number of hidden layers for wireframe estimation
wireframe_estimation_channels – number of channels for wireframe estimation
weight_pred – weight for point estimation
weight_prob – weight for probability estimation
weight_reconstruction – weight for reconstruction
weight_wireframe – weight for wireframe estimation
knn_line – number of nearest neighbours for line estimation
distance_line – distance threshold for line estimation
probabilistic – estimate probabilities: labels in [0,1]
store_in_memory – store training data in memory
mode_wireframe_estimation – wireframe mode
maximum_wireframe_samples – maximum number of wireframe samples
wireframe_subsampling – wireframe subsampling factor
wireframe_extrapolation_sampling – wireframe extrapolation sampling factor
only_train_wireframe – only train wireframe
- class ops3d(outer_class)
Bases: object
Namespace for ops3d functions.
- align_points(path_source_in='segmented_object', path_transformation_in='transformations', path_source_out='aligned_points', folder_parallel_processing='__auto__')
align points
- Parameters:
path_source_in – input data folder
path_transformation_in – input transformation folder
path_source_out – output folder
- assign_point_labels(path_source_in='__auto__', path_labels_in='__auto__', path_source_out='__auto__', dtype='classification', all_type='', extension_file_source_in='.laz', extension_file_labels_in='.npy', extension_file_source_out='.laz', folder_parallel_processing='__auto__')
assign point labels
- Parameters:
path_source_in – input data folder
path_labels_in – input labels folder
path_source_out – output folder
dtype – attribute to assign, e.g. classification
all_type – values to load
- crop_and_merge_polygons(point_cloud_paths='__auto__', polygon_path='__auto__', output_path='__auto__', extension_point_cloud_files='.laz', extension_polygon_file='.pickle', extension_output_file='.laz', folder_parallel_processing='__auto__')
crop and merge polygons
- Parameters:
point_cloud_paths – Input folder for the point clouds
polygon_path – Input folder for the polygon (pickle)
output_path – Output folder for the cropped point cloud
- crop_circle(in_path='__auto__', out_path='__auto__', latitude=1, longitude=1, lat_lon_path='__auto__', radius=75, cols='', max_num_processes=0, extension_in_file='.laz', extension_out_file='.laz', extension_lat_lon_file='.laz', folder_parallel_processing='__auto__')
crop circle
- Parameters:
in_path – input folder
out_path – output folder
latitude – latitude
longitude – longitude
lat_lon_path – (optional) folder with lat lon coordinates
radius – radius for cropping
cols – columns to be used, leave empty for all
max_num_processes – maximum number of processes
- crop_points_to_polygon(in_points_path='__auto__', in_polygon_path='__auto__', out_path='__auto__', cols_in='', extension_in_points_file='.laz', extension_in_polygon_file='.pickle', extension_out_file='.laz', folder_parallel_processing='__auto__')
crop points to polygon
- Parameters:
in_points_path – Input folder for the point cloud
in_polygon_path – Input folder for the polygon (pickle)
out_path – Output folder for the cropped point cloud
cols_in – columns to load
- crop_to_equal_value_range(path1_in='segmented_object1', path2_in='segmented_object2', path1_out='crop_relative_height1', path2_out='crop_relative_height2', reference='max', axis=2, max_num_processes=0, folder_parallel_processing='__auto__')
crop to equal value range
- Parameters:
path1_in – input folder
path2_in – input folder
path1_out – output folder
path2_out – output folder
reference – [max, min, same]: same value range relative to maximum point [max], relative to minimum point [min] or absolute coordinates [same]
axis – axis to crop values
max_num_processes – Number of parallel processes
- density_based_clustering(pathname='__auto__', cluster_id_pathname='__auto__', cluster_centers_pathname='__auto__', wireframe_pathname='__auto__', epsilon=0.25, min_samples=0, dim=3, wireframe='False', extension_filename='.laz', extension_cluster_id_filename='.npy', extension_cluster_centers_filename='.laz', extension_wireframe_filename='.npy', folder_parallel_processing='__auto__')
Density-based Point Cloud Clustering
- Parameters:
pathname – Input .laz folder
cluster_id_pathname – Output cluster IDs folder
cluster_centers_pathname – Output cluster centers .laz folder
wireframe_pathname – Output wireframe connections folder
epsilon – DBSCAN epsilon
min_samples – DBSCAN min_samples
dim – Point dimension
wireframe – Whether to compute wireframe connections
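For example, clustering points and emitting cluster centers together with wireframe connections could look like this sketch (folder names are illustrative):

```python
processing.ops3d.density_based_clustering(
    pathname='building_points/',
    cluster_id_pathname='cluster_ids/',
    cluster_centers_pathname='cluster_centers/',
    wireframe_pathname='wireframes/',
    epsilon=0.25,        # DBSCAN neighbourhood radius
    min_samples=3,       # DBSCAN core-point threshold
    wireframe='True')    # also compute wireframe connections
processing.execute()
```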
- filter_label_disagreement_knn(path_points_in='__auto__', path_labels_in='__auto__', path_label_disagrement_in='__auto__', path_label_disagrement_out='__auto__', distance=2, classes_to_compare='2', comparison_type='2', class_to_filter=1, dim_data=3, knn=2, comparison_axis=-1, invalid_label=0, extension_file_points_in='.laz', extension_file_labels_in='.npy', extension_file_label_disagrement_in='.npy', extension_file_label_disagrement_out='.npy', folder_parallel_processing='__auto__')
filter label disagreement knn
- Parameters:
path_points_in – input folder [.laz or .las]
path_labels_in – input folder [.txt or .npy]
path_label_disagrement_in – input folder [.txt or .npy]
path_label_disagrement_out – output folder [.txt or .npy]
distance – distance threshold
classes_to_compare – classes to compare, comma separated
comparison_type – [ge: greater equal, le: less equal]
class_to_filter – class to filter
dim_data – Dimensions to use: 3: x,y,z; 2: x, y
knn – k-nearest-neighbours
comparison_axis – axis to compare: -1: Euclidean distance; 0, 1 or 2: distance along x, y or z axis
invalid_label – invalid label
- filter_label_noise(path_in_data='__auto__', path_in_labels='__auto__', path_out='__auto__', k_nearest_neighbours=5, sigma=10.0, dim=3, invalid_label=0, extension_file_in_data='.laz', extension_file_in_labels='.labels', extension_file_out='.laz', folder_parallel_processing='__auto__')
filter label noise
- Parameters:
path_in_data – input data folder
path_in_labels – input labels folder
path_out – output folder
k_nearest_neighbours – k nearest neighbours
sigma – sigma
dim – dimension
invalid_label – invalid class label
- fit_line_model(path_in='segmented_object', path_out='fit_line_model', residual_threshold=30.05, min_samples=2, max_trials=1, max_dim=3, max_num_processes=0, folder_parallel_processing='__auto__')
fit line model
- Parameters:
path_in – input folder
path_out – output folder
residual_threshold – maximum residual for a point to be counted as an inlier
min_samples – minimum number of points to fit a line
max_trials – maximum number of trials
max_dim – max_dim 0: x, 1: y, 3: z
max_num_processes – Number of parallel processes
- get_bounding_box(in_path='__auto__', dimension=3, out_path='__auto__', extension_in_file='.laz', extension_out_file='.npy', folder_parallel_processing='__auto__')
Get bounding box from las or laz file
- Parameters:
in_path – Input .laz folder
dimension – Dimension of the point cloud
out_path – Output bounding box folder
- get_meta_data(in_path='__auto__', out_path='__auto__', extension_in_file='.laz', extension_out_file='.json', folder_parallel_processing='__auto__')
Get meta data from las or laz file
- Parameters:
in_path – Input .laz folder
out_path – Output meta data folder
- get_point_values(path_source_in='__auto__', path_labels_out='__auto__', dtype='classification', decomposed_labels='True', extension_file_source_in='.laz', extension_file_labels_out='.txt', folder_parallel_processing='__auto__')
get point values
- Parameters:
path_source_in – input folder [.laz or .las]
path_labels_out – output folder [.txt or .npy]
dtype – attribute type to extract
decomposed_labels – whether to write decomposed labels
- iterative_closest_point(path_source_in='__auto__', path_target_in='__auto__', path_source_out='__auto__', path_trafo_out='__auto__', metric='point2point', threshold=0.2, max_correspondences=5, extension_file_source_in='.laz', extension_file_target_in='.laz', extension_file_source_out='.laz', extension_file_trafo_out='.txt', folder_parallel_processing='__auto__')
iterative closest point
- Parameters:
path_source_in – input source folder
path_target_in – input target folder
path_source_out – output folder
path_trafo_out – output transformation
metric – error metric, e.g. point2point
threshold – maximum correspondence distance
max_correspondences – maximum number of nearest neighbours for correspondences
- iterative_outlier_removal(path_in='segmented_object', path_out='iterative_outlier_removal', decay_factor=0.98, iteration_count=10, max_num_processes=0, folder_parallel_processing='__auto__')
iterative outlier removal
- Parameters:
path_in – input folder
path_out – output folder
decay_factor – decay factor applied per iteration
iteration_count – number of iterations
max_num_processes – Number of parallel processes
- make_laz_from_values(path_values_in='__auto__', path_points_out='__auto__', dtype='X,Y,Z', scale='0.01,0.01,0.01', point_format=7, extension_file_values_in='.npy', extension_file_points_out='.laz', folder_parallel_processing='__auto__')
make laz from values
- Parameters:
path_values_in – input data folder
path_points_out – output folder
dtype – data channels
scale – scale value
point_format – point format
- make_line_model_from_points(path_in='segmented_object', path_out='vobject_coordinates3D', dim=3, max_num_processes=0, folder_parallel_processing='__auto__')
make line model from points
- Parameters:
path_in – input data folder
path_out – output folder
dim – dimension
max_num_processes – maximum number of processes
- point_cloud_to_dsm(path_points_in='__auto__', path_dsm_out='__auto__', path_dtm_out='__auto__', path_chm_out='__auto__', grid_size=0.5, extension_file_points_in='.laz', extension_file_dsm_out='.tif', extension_file_dtm_out='.tif', extension_file_chm_out='.tif', folder_parallel_processing='__auto__')
point cloud to dsm
- Parameters:
path_points_in – input points
path_dsm_out – output DSM folder
path_dtm_out – output DTM folder
path_chm_out – output CHM folder
grid_size – grid size
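A sketch rasterizing point clouds into surface, terrain, and canopy-height models in one call (folder names are illustrative):

```python
processing.ops3d.point_cloud_to_dsm(
    path_points_in='clouds/',
    path_dsm_out='dsm/',   # digital surface model
    path_dtm_out='dtm/',   # digital terrain model
    path_chm_out='chm/',   # canopy height model
    grid_size=0.5)         # raster cell size
processing.execute()
```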
- quantile_filter(path_in='segmented_object', path_out='quantile_filterd', max_quantile=0.995, min_quantile=0.3, axis=2, max_num_processes=0, folder_parallel_processing='__auto__')
quantile filter
- Parameters:
path_in – input folder
path_out – output folder
max_quantile – maximum quantile
min_quantile – minimum quantile
axis – axis 0: x, 1: y, 2: z
max_num_processes – Number of parallel processes
- retile_generate_grid_globally(in_paths='__auto__', dimension=3, grid_size='20,20,50', offset_factor=0.0, reference_point='', out_path_tiles='__auto__', out_path_mapping_slice_point_cloud='slices', out_path_mapping_point_cloud_to_tiles='mapping_point_cloud_to_tiles', out_path_mapping_tiles_to_point_cloud='mapping_tiles_to_point_cloud', extension_in_paths='.laz', extension_out_path_tiles='', folder_parallel_processing='__auto__')
Create grid for retiling point clouds over multiple georeferenced point clouds
- Parameters:
in_paths – folder with LAZ files to be retiled
dimension – Dimension to be retiled (x,y) or (x,y,z)
grid_size – Grid size for retiling
offset_factor – Offset factor for grid generation
reference_point – Reference point for grid generation, empty for default (min_x, min_y, min_z)
out_path_tiles – Output bounding box / tiles folder
out_path_mapping_slice_point_cloud – Output path for mapping that contains the point clouds (including neighbouring point clouds) that are used to generate slices from point cloud x
out_path_mapping_point_cloud_to_tiles – Output path for mapping that contains the tiles that are generated from point cloud x
out_path_mapping_tiles_to_point_cloud – Output path for mapping that contains the point clouds that are used to generate tile x
- retile_generate_grid_locally(in_path='__auto__', dimension=3, grid_size='20,20,50', offset_factor=0.0, reference_point='', out_path_tiles='__auto__', out_path_mapping_slice_point_cloud='__auto__', out_path_mapping_point_cloud_to_tiles='__auto__', out_path_mapping_tiles_to_point_cloud='__auto__', extension_in_path='.laz', extension_out_path_tiles='', extension_out_path_mapping_slice_point_cloud='.txt', extension_out_path_mapping_point_cloud_to_tiles='.txt', extension_out_path_mapping_tiles_to_point_cloud='.txt', folder_parallel_processing='__auto__')
Create grid for retiling individual point clouds
- Parameters:
in_path – folder with LAZ files to be retiled
dimension – Dimension to be retiled (x,y) or (x,y,z)
grid_size – Grid size for retiling
offset_factor – Offset factor for grid generation
reference_point – Reference point for grid generation, empty for default (min_x, min_y, min_z)
out_path_tiles – Output bounding box / tiles folder
out_path_mapping_slice_point_cloud – Output folder for mapping that contains the point clouds (including neighbouring point clouds) that are used to generate slices from point cloud x
out_path_mapping_point_cloud_to_tiles – Output folder for mapping that contains the tiles that are generated from point cloud x
out_path_mapping_tiles_to_point_cloud – Output folder for mapping that contains the point clouds that are used to generate tile x
- retile_grid_to_point_cloud(in_path_grid='point_cloud_grid', in_path_mapping='__auto__', out_path_points='__auto__', extension_in_path_mapping='.txt', extension_out_path_points='.laz', folder_parallel_processing='__auto__')
retile grid back to point clouds
- Parameters:
in_path_grid – folder that contains the retiled point clouds
in_path_mapping – Mapping that specifies which point clouds of the grid intersect with the original point cloud
out_path_points – Output folder for the merged point cloud
- retile_point_cloud_to_grid(in_path_points='__auto__', in_path_grids='grid1.npy,grid2.npy,grid3.npy', out_path_points='out.laz,out2.laz', extension_in_path_points='.laz', folder_parallel_processing='__auto__')
retile point clouds to grid
- Parameters:
in_path_points – Input folder for the point clouds to be retiled
in_path_grids – Input grid files (comma-separated)
out_path_points – Output paths for the retiled point clouds (comma-separated)
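The three retile calls combine into a round trip: build one grid over all clouds, cut each cloud into tiles, and later merge tiles back into per-cloud files. A sketch; the exact wiring of the grid and mapping outputs into the later steps is an assumption, and folder names are illustrative:

```python
# 1. One global grid plus the cloud/tile mappings.
processing.ops3d.retile_generate_grid_globally(
    in_paths='clouds/',
    grid_size='20,20,50',
    out_path_tiles='tiles/',
    out_path_mapping_point_cloud_to_tiles='mapping_point_cloud_to_tiles',
    out_path_mapping_tiles_to_point_cloud='mapping_tiles_to_point_cloud')
# 2. Cut each cloud into its tiles (grid wiring assumed).
processing.ops3d.retile_point_cloud_to_grid(
    in_path_points='clouds/',
    in_path_grids='tiles/',
    out_path_points='tiled_clouds/')
# 3. Merge processed tiles back into per-cloud files via the mapping.
processing.ops3d.retile_grid_to_point_cloud(
    in_path_grid='tiled_clouds/',
    in_path_mapping='mapping_tiles_to_point_cloud',
    out_path_points='merged_clouds/')
processing.execute()
```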
- select_center_object(in_directory='laz_files', out_path='__auto__', latitude=1, longitude=1, extension_out_file='.laz', folder_parallel_processing='__auto__')
select center object
- Parameters:
in_directory – input directory
out_path – output folder
latitude – latitude
longitude – longitude
- select_points_by_value(path_source_in='__auto__', min_value=1, max_value=1, attribute='classification', path_source_out='__auto__', keep_empty='True', extension_file_source_in='', extension_file_source_out='', folder_parallel_processing='__auto__')
Selects points by value of attribute
- Parameters:
path_source_in – input folder data
min_value – minimum value
max_value – maximum value
attribute – feature for selection
path_source_out – output folder
keep_empty – save empty files
- uniform_down_sampling(input_path='__auto__', cols='', output_path='__auto__', every_k_points=2, extension_input_file='.laz', extension_output_file='.laz', folder_parallel_processing='__auto__')
Uniform down sampling of point cloud
- Parameters:
input_path – Input point cloud folder
cols – Columns to read from input file, default is all columns
output_path – Output point cloud folder
every_k_points – Keep every k points
- uniform_down_sampling_voxel(input_path='__auto__', cols='', output_path='__auto__', voxel_size=0.05, extension_input_file='.laz', extension_output_file='.laz', folder_parallel_processing='__auto__')
Uniform down sampling of point cloud using voxel grids
- Parameters:
input_path – Input point cloud folder
cols – Columns to read from input file, default is all columns
output_path – Output point cloud folder
voxel_size – voxel size
- uniform_downsampling(path_in='__auto__', path_out='__auto__', k=3, dtype='', extension_file_in='.laz', extension_file_out='.laz', folder_parallel_processing='__auto__')
uniform downsampling
- Parameters:
path_in – input folder data
path_out – output folder
k – sampling factor (keep every k-th point)
dtype – values from point cloud, e.g. X,Y,Z
- voxel_downsampling(path_in='__auto__', path_out='__auto__', voxel_size=0.1, dtype='', extension_file_in='.laz', extension_file_out='.laz', folder_parallel_processing='__auto__')
Deprecated: use uniform_down_sampling_voxel instead.
- Parameters:
path_in – input data folder
path_out – output folder
voxel_size – voxel size
dtype – values from point cloud, e.g. X,Y,Z
- class qc(outer_class)
Bases: object
Namespace for qc functions.
- report_image_completeness(in_path='__auto__', in_meta_data_path='__auto__', out_path='__auto__', grid_size=0.5, populated_class=1, small_holes_class=100, large_holes_class=255, keep_error_free='True', extension_in_file='.txt', extension_in_meta_data_file='.json', extension_out_file='.txt', folder_parallel_processing='__auto__')
report image completeness
- Parameters:
in_path – folder with count of classes
in_meta_data_path – folder with metadata
out_path – output report folder
grid_size – grid size
populated_class – populated class
small_holes_class – small holes class
large_holes_class – large holes class
keep_error_free – Save empty files?
- report_lidar_completeness(in_path='__auto__', out_path='__auto__', grid_size=0.5, populated_class=1, small_holes_class=100, large_holes_class=255, keep_error_free='True', extension_in_file='.txt', extension_out_file='.txt', folder_parallel_processing='__auto__')
report lidar completeness
- Parameters:
in_path – folder with erroneous points
out_path – output report folder
grid_size – grid size
populated_class – populated class
small_holes_class – small holes class
large_holes_class – large holes class
keep_error_free – Save empty files?
- report_qc_classification(in_path='__auto__', out_path='__auto__', error_classes='148,149', error_names='Line,Tower', keep_error_free='True', extension_in_file='.laz', extension_out_file='.txt', folder_parallel_processing='__auto__')
report qc classification
- Parameters:
in_path – folder with erroneous points
out_path – output report folder
error_classes – error classes
error_names – error names
keep_error_free – Save empty files?
- report_vegetation_occurance(in_path='__auto__', out_path='__auto__', ground_classes_old='2,3,6,7,15', ground_classes_new='1,3,9,11,15', vegetation_old='6,7,15', vegetation_new='9,11,15', keep_error_free='True', extension_in_file='.txt', extension_out_file='.txt', folder_parallel_processing='__auto__')
report vegetation occurrence
- Parameters:
in_path – folder with erroneous points
out_path – output report folder
ground_classes_old – old ground classes
ground_classes_new – new ground classes
vegetation_old – vegetation old classes
vegetation_new – vegetation new classes
keep_error_free – Save empty files?
- class shp(outer_class)
Bases: object
Namespace for shp functions.
- extract_multipolygons_from_shp(shp_path='__auto__', out_polygon_path='polygons/', out_attributes_path='attributes/', shape_id=-1, name_id=0, extension_shp_file='', folder_parallel_processing='__auto__')
extract multipolygons from shp
- Parameters:
shp_path – input shp folder
out_polygon_path – folder with polygons from shape file
out_attributes_path – folder with records from shape file
shape_id – id of polygon: [-1 parses all polygons]
name_id – id of the name attribute: [-1 ignores name]
- intersecting_polygons(input_path='__auto__', comparison_path='polygons', output_path='__auto__', extension_input_file='.pickle', extension_output_file='.txt', folder_parallel_processing='__auto__')
intersecting polygons
- Parameters:
input_path – Input folder for the polygon
comparison_path – Input folder containing polygons for comparison
output_path – Output folder for the list of intersecting polygon filenames
- make_polygon_from_json(input_path='__auto__', output_path='__auto__', point_identifiers='min_x,min_y;min_x,max_y;max_x,max_y;max_x,min_y', extension_input_file='.json', extension_output_file='.pickle', folder_parallel_processing='__auto__')
make polygon from json
- Parameters:
input_path – Input folder for the JSON files
output_path – Output folder for the polygons
point_identifiers – Point identifiers for the polygon
- wireframe_to_dxf(input_path='__auto__', edges_path='__auto__', output_path='__auto__', extension_input_file='.laz', extension_edges_file='.npy', extension_output_file='.dxf', folder_parallel_processing='__auto__')
wireframe to dxf
- Parameters:
input_path – Input folder for the vertices
edges_path – Input folder for the edges
output_path – Output folder for the dxf model
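shp and ops3d calls combine naturally; a sketch extracting polygons from a shapefile and cropping point clouds to them (folder names are illustrative; crop_points_to_polygon is documented under ops3d above):

```python
# Parse every multipolygon out of the shapefile (shape_id=-1) ...
processing.shp.extract_multipolygons_from_shp(
    shp_path='areas/',
    out_polygon_path='polygons/',
    out_attributes_path='attributes/',
    shape_id=-1)
# ... and crop the point clouds to the pickled polygons.
processing.ops3d.crop_points_to_polygon(
    in_points_path='clouds/',
    in_polygon_path='polygons/',
    out_path='cropped/')
processing.execute()
```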
- class sys(outer_class)
Bases: object
Namespace for sys functions.
- copy_file_in_cloud(target='__auto__', destination='__auto__', extension_target='', extension_destination='', folder_parallel_processing='__auto__')
copy file in cloud
- Parameters:
target – Target to be copied
destination – Destination
- create_directory_in_cloud(destination='__auto__', extension_destination='', folder_parallel_processing='__auto__')
create directory in cloud
- Parameters:
destination – Destination location on host; default folder: ./data
- download_data_to_cloud(url='__auto__', destination='__auto__', protocol='', download_type=0, username='', password='', port=21, extension_url='', extension_destination='', folder_parallel_processing='__auto__')
download data to cloud
- Parameters:
url – URL to data
destination – Destination location on host; default folder: ./data
protocol – protocol; empty: automatically try to infer the protocol, ftp: ftp, sftp: sftp
download_type – download type: 0: all files from folder, 1: individual file
username – Username
password – Password
port – port
- download_from_host_to_aipha(url='__auto__', port='22', username='ubuntu', identity_path='', location='file.laz', destination='__auto__', extension_url='.1', extension_destination='.laz', folder_parallel_processing='__auto__')
Download a path from a host via ssh
- Parameters:
url – URL of the host
port – Port to host
username – Username to host
identity_path – Path to identity file on aipha
location – Path to download from host
destination – Location to upload to aipha
- download_from_s3_to_aipha(access_key_id='YOUR_KEY_ID', secret_access_key='YOUR_SECRET_KEY', aws_region='eu-central-1', location='file.laz', destination='__auto__', bucket_name='Your S3 bucket', extension_destination='.laz', folder_parallel_processing='__auto__')
Download a path from an S3 bucket
- Parameters:
access_key_id – AWS access key ID
secret_access_key – AWS secret access key
aws_region – AWS region
location – Path to download from s3
destination – Location to upload to aipha
bucket_name – S3 bucket name
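A sketch of an S3 round trip: pull an input file from a bucket, queue processing on it, and push the result back with upload_from_aipha_to_s3 (documented below). Credentials, bucket, and paths are placeholders:

```python
processing.sys.download_from_s3_to_aipha(
    access_key_id='YOUR_KEY_ID',        # placeholder credentials
    secret_access_key='YOUR_SECRET_KEY',
    aws_region='eu-central-1',
    location='inputs/file.laz',         # path inside the bucket
    destination='data/file.laz',        # destination on AIPHA
    bucket_name='my-bucket')
# ... queue processing steps on data/file.laz here ...
processing.sys.upload_from_aipha_to_s3(
    access_key_id='YOUR_KEY_ID',
    secret_access_key='YOUR_SECRET_KEY',
    aws_region='eu-central-1',
    target='results/file.laz',          # path on AIPHA
    location='outputs/file.laz',        # destination inside the bucket
    bucket_name='my-bucket')
processing.execute()
```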
- find_file_paths(input_paths='__auto__', output_paths='__auto__', search_path='/search_folder', replace_in='', replace_out='', substrings='', extension_input_files='.txt', extension_output_files='.txt', folder_parallel_processing='__auto__')
find file paths
- Parameters:
input_paths – File containing the list of filenames
output_paths – Path to save the modified file list
search_path – Folder to traverse for finding files
replace_in – The part to replace in the filenames
replace_out – The new part to replace with
substrings – a list of substrings that must occur in a file path for it to be valid
- list_files_in_cloud(target='__auto__', path_out='__auto__', extension_target='', extension_file_out='.txt', folder_parallel_processing='__auto__')
list files in cloud
- Parameters:
target – Target to be listed
path_out – output folder
- move_file_in_cloud(target='__auto__', destination='__auto__', extension_target='', extension_destination='', folder_parallel_processing='__auto__')
move file in cloud
- Parameters:
target – Target to be moved
destination – Destination
- recursive_list(target='__auto__', destination='__auto__', extension_target='', extension_destination='.txt', folder_parallel_processing='__auto__')
recursive list
- Parameters:
target – Target folder to be listed recursively
destination – Output folder
- remove_files_from_cloud(target='__auto__', extension_target='', folder_parallel_processing='__auto__')
remove files from cloud
- Parameters:
target – Target to be deleted
- rename_file_in_cloud(target='__auto__', prefix='', suffix='', replace_from='', replace_to='', replace_count=0, extension_target='', folder_parallel_processing='__auto__')
rename file in cloud
- Parameters:
target – Target to be renamed
prefix – add prefix
suffix – add suffix
replace_from – substring to replace in the filename
replace_to – replacement string
replace_count – number of replacements
- select_by_identifier(original_path='original_folder', original_identifier_path='__auto__', output_path='output_folder', extension_original_identifier_file='.txt', folder_parallel_processing='__auto__')
select by identifier
- Parameters:
original_path – original folder
original_identifier_path – original identifiers
output_path – output folder
- select_corresponding_path(original_path='__auto__', original_identifier_path='__auto__', corresponding_path='__auto__', output_path='__auto__', selection_criteria='oldest', default_value='__original__', extension_original_file='.txt', extension_original_identifier_file='.txt', extension_corresponding_file='.txt', extension_output_file='.txt', folder_parallel_processing='__auto__')
select corresponding path
- Parameters:
original_path – original folders
original_identifier_path – original identifiers
corresponding_path – corresponding folders
output_path – output folder
selection_criteria – selection criteria: [oldest, newest, shortest, longest]
default_value – default value if no corresponding path is found
- split_path(in_path='__auto__', out_path='__auto__', split_type='filename', extension_in_path='', extension_out_path='', folder_parallel_processing='__auto__')
split path
- Parameters:
in_path – input folder
out_path – output folder
split_type – split type: [filename, dirname, basename, ext]
- touch_file_in_cloud(target='__auto__', extension_target='.txt', folder_parallel_processing='__auto__')
touch file in cloud
- Parameters:
target – File to be touched
- upload_data_from_cloud(url='__auto__', target='__auto__', protocol='', username='', password='', port=21, extension_url='', extension_target='', folder_parallel_processing='__auto__')
upload data from cloud
- Parameters:
url – destination URL
target – Target location on host for upload; default folder: ./data
protocol – protocol; empty: automatically try to infer the protocol, ftp: ftp, sftp: sftp
username – Username
password – Password
port – port
- upload_from_aipha_to_host(url='__auto__', port='22', username='ubuntu', identity_path='', target='__auto__', location='file.laz', extension_url='.1', extension_target='.laz', folder_parallel_processing='__auto__')
Upload a path to a host via ssh
- Parameters:
url – URL of the host
port – Port to host
username – Username to host
identity_path – Path to identity file on aipha
target – Path to upload from aipha
location – Location of file to upload on host
- upload_from_aipha_to_s3(access_key_id='YOUR_KEY_ID', secret_access_key='YOUR_SECRET_KEY', aws_region='eu-central-1', target='__auto__', location='file.laz', bucket_name='Your S3 bucket', extension_target='.laz', folder_parallel_processing='__auto__')
Upload a path to an S3 bucket
- Parameters:
access_key_id – AWS access key ID
secret_access_key – AWS secret access key
aws_region – AWS region
target – Path to upload from aipha
location – Location of file to upload on s3
bucket_name – S3 bucket name
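A sketch of pushing a result file to S3; the credentials, region, bucket, and paths are all placeholders, and `services` again stands in for the unnamed namespace:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.services.upload_from_aipha_to_s3(  # placeholder namespace name
        access_key_id='YOUR_KEY_ID',              # placeholder credentials
        secret_access_key='YOUR_SECRET_KEY',
        aws_region='eu-central-1',
        target='processing/results',              # placeholder source path
        location='file.laz',                      # key to write in the bucket
        bucket_name='my-bucket',                  # placeholder bucket
    )
    processing.execute()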
- class tdp(outer_class)
Bases:
object
Namespace for tdp functions.
- convert_laz_point_formats(path_in='__auto__', path_out='__auto__', format=7, extension_file_in='.laz', extension_file_out='.labels', folder_parallel_processing='__auto__')
convert laz point formats
- Parameters:
path_in – input folder
path_out – results folder
format – target LAZ point format id (default: 7)
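For example, converting a folder of LAZ tiles to point format 7 could be queued as below (a sketch; the folders are illustrative, and the output extension is overridden since it defaults to '.labels'):

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)  # client: authenticated AIPHA API client
    processing.tdp.convert_laz_point_formats(
        path_in='processing/raw_laz',       # illustrative folders
        path_out='processing/laz_format7',
        format=7,                           # target LAZ point format
        extension_file_out='.laz',          # write LAZ instead of default '.labels'
    )
    processing.execute()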
- merge_and_split_results_csv(new_tower_path='new_paths.txt', last_tower_path='last_paths.txt', reference_tower_path='reference_paths.txt', results_path_csv='results.csv', results_plots_path='results_plots', merged_results_path_csv='results/Reports_2023', resturctured_plots_path='results/10-Plots-Tragwerke', input_path_structure_path='input_file_structure.txt', year='2023', folder_parallel_processing='__auto__')
[atr] Merge results CSV files and restructure plots
- Parameters:
new_tower_path – input path of the new tower data
last_tower_path – input path of the last tower data
reference_tower_path – input path of the reference tower data
results_path_csv – input results.csv path
results_plots_path – input results_plots path
merged_results_path_csv – output path for the merged CSV
resturctured_plots_path – output path for the restructured plots
input_path_structure_path – input file structure path
year – year
- point_cloud_classification_inference(path_in='__auto__', path_out='__auto__', model_path='network_parameters', cols_data='X,Y,Z', cols_labels='classification', extension_file_in='.laz', extension_file_out='.labels', folder_parallel_processing='__auto__')
point cloud classification inference
- Parameters:
path_in – input folder
path_out – results folder
model_path – path to model
cols_data – attributes used
cols_labels – label name
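A minimal inference sketch, assuming trained parameters already sit in network_parameters and a GPU-capable worker type; the tile folders are illustrative:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client, worker_instance_type='P2')  # GPU worker
    processing.tdp.point_cloud_classification_inference(
        path_in='processing/tiles',        # illustrative folders
        path_out='processing/predictions',
        model_path='network_parameters',
        cols_data='X,Y,Z',                 # attributes fed to the model
        cols_labels='classification',      # label field to write
    )
    processing.execute()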
- point_cloud_filter_label_noise(path_in_data='__auto__', path_in_labels='__auto__', path_out='__auto__', k_nearest_neighbours=5, sigma=10.0, dim=3, invalid_label=0, extension_file_in_data='.laz', extension_file_in_labels='.labels', extension_file_out='.laz', folder_parallel_processing='__auto__')
point cloud filter label noise
- Parameters:
path_in_data – input folder data
path_in_labels – input folder labels
path_out – output folder
k_nearest_neighbours – k nearest neighbours
sigma – filter sigma (default: 10.0)
dim – number of coordinate dimensions used (default: 3)
invalid_label – invalid class label
- segment_objects(in_points_path='__auto__', in_labels_path='__auto__', out_directory='segmented_object', out_prefix='object', label_col='classification', object_class=68, max_distance=2, min_points=100, extension_in_points_file='', extension_in_labels_file='', folder_parallel_processing='__auto__')
segment objects
- Parameters:
in_points_path – input folder points
in_labels_path – input folder labels
out_directory – output directory
out_prefix – output filename prefix
label_col – label column id
object_class – object class
max_distance – maximum distance for segmentation
min_points – minimum number of points
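For example, extracting connected objects of class 68 (a sketch; the distance unit is assumed to be metres, and the paths are illustrative):

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.tdp.segment_objects(
        in_points_path='processing/points',  # illustrative folders
        in_labels_path='processing/labels',
        out_directory='segmented_object',
        out_prefix='object',
        object_class=68,     # label value to segment
        max_distance=2,      # assumed metres between points of one object
        min_points=100,      # drop segments smaller than this
    )
    processing.execute()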
- tower_displacement(laz_in_path_new='__auto__', laz_in_path_old='__auto__', laz_in_path_ref='__auto__', tower_name='', year_new='2022', year_old='2020', year_ref='2018', results_out_path='__auto__', plots_out_path='plots/', extension_laz_in_file_new='.laz', extension_laz_in_file_old='.laz', extension_laz_in_file_ref='.laz', extension_results_out_file='.txt', folder_parallel_processing='__auto__')
tower displacement
- Parameters:
laz_in_path_new – LAZ input folder for the new data
laz_in_path_old – LAZ input folder for the last data
laz_in_path_ref – LAZ input folder for the first (reference) data
tower_name – tower name
year_new – year of new data
year_old – year of old data
year_ref – year of reference data
results_out_path – results output folder
plots_out_path – output folder for plots
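A sketch of a three-epoch displacement run; the epoch folders and tower name are illustrative:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.tdp.tower_displacement(
        laz_in_path_new='processing/towers/2022',  # illustrative folders
        laz_in_path_old='processing/towers/2020',
        laz_in_path_ref='processing/towers/2018',
        tower_name='T042',                         # illustrative name
        year_new='2022', year_old='2020', year_ref='2018',
        results_out_path='processing/displacement',
        plots_out_path='plots/',
    )
    processing.execute()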
- class val(outer_class)
Bases:
object
Namespace for val functions.
- add_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=0.0, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Add a constant value to a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Constant value to add (default: 0.0)
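The val operations follow the same queue-then-execute pattern; a minimal sketch with illustrative folders:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)  # client: authenticated AIPHA API client
    processing.val.add_constant(
        inpath='processing/heights',        # illustrative .npy folders
        outpath='processing/heights_plus1',
        dtype='float',
        constant=1.0,                       # add 1.0 to every element
    )
    processing.execute()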
- argmax(inpath='__auto__', outpath='__auto__', dtype='float', axis=-1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Argmax of a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
axis – Axis along which to find the argmax (default: -1)
- argmin(inpath='__auto__', outpath='__auto__', dtype='float', axis=-1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Argmin of a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
axis – Axis along which to find the argmin (default: -1)
- connected_components_labeling(pathname_in='__auto__', pathname_out='__auto__', dtype='float', no_type=0.0, value=1.0, extension_filename_in='.npy', extension_filename_out='.npy', folder_parallel_processing='__auto__')
Perform connected components labeling on a matrix.
- Parameters:
pathname_in – Input folder for the matrix
pathname_out – Output folder for the labeled matrix
dtype – Data type of the matrix (default: float)
no_type – Value representing no_type in the matrix (default: 0.0)
value – Value representing value in the matrix (default: 1.0)
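For instance, labeling the foreground components of a binary occupancy grid (a sketch; reading no_type as background and value as foreground is an assumption):

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.val.connected_components_labeling(
        pathname_in='processing/occupancy',   # illustrative .npy folders
        pathname_out='processing/components',
        no_type=0.0,   # assumed: background value
        value=1.0,     # assumed: foreground value
    )
    processing.execute()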
- count_unique_values(pathname_in='__auto__', pathname_out='__auto__', dtype='float', ignore='nan', extension_filename_in='.npy', extension_filename_out='.npy', folder_parallel_processing='__auto__')
Count unique occurrences of values in a matrix.
- Parameters:
pathname_in – Input folder for the matrix
pathname_out – Output folder for the unique counts matrix
dtype – Data type of the matrix (default: float)
ignore – Data value to ignore
- divide_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1.0, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Divide a matrix by a constant value.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Constant value to divide by (default: 1.0)
- equal_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Equal operator on a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Value to compare (default: 1)
- greater_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Greater operator on a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Value to compare (default: 1)
- greater_equal_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Greater equal operator on a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Value to compare (default: 1)
- hstack(path_values_in='__auto__', path_values_out='__auto__', dtype='str', extension_file_values_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Horizontally stack values.
- Parameters:
path_values_in – input folder [.npy, .labels or .txt]
path_values_out – output folder [.txt or .npy]
dtype – data type
- less_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Less operator on a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Value to compare (default: 1)
- less_equal_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Less equal operator on a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Value to compare (default: 1)
- mask_subset(path_values1_in='__auto__', path_mask_in='__auto__', path_values_out='__auto__', extension_file_values1_in='.npy', extension_file_mask_in='.npy', extension_file_values_out='.txt', folder_parallel_processing='__auto__')
Select a subset of values using a [0,1] mask.
- Parameters:
path_values1_in – input folder [.txt or .npy]
path_mask_in – input folder that contains [0,1] values
path_values_out – output folder [.txt or .npy]
- masked_assign_constant(path_values_in='__auto__', constant=0.0, path_mask_in='__auto__', path_values_out='__auto__', extension_file_values_in='.npy', extension_file_mask_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Assign a constant to values selected by a [0,1] mask.
- Parameters:
path_values_in – input folder [.txt or .npy]
constant – constant value to assign
path_mask_in – input folder that contains [0,1] values
path_values_out – output folder [.txt or .npy]
- max(inpath='__auto__', outpath='__auto__', dtype='float', extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Maximum of a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
- min(inpath='__auto__', outpath='__auto__', dtype='float', extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Minimum of a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
- multiply_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1.0, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Multiply a matrix by a constant value.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Constant value to multiply by (default: 1.0)
- not_equal_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Not equal operator on a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Value to compare (default: 1)
- remap_values(path_values_in='__auto__', path_values_out='__auto__', map_in='1,2,3,4', map_out='3,1,2,2', dtype_in='int32', dtype_out='int32', unmapped='0', extension_file_values_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
remap values
- Parameters:
path_values_in – input folder [.txt, .labels or .npy]
path_values_out – output folder [.txt, .labels or .npy]
map_in – comma-separated list of source values
map_out – comma-separated list of target values
dtype_in – data type input
dtype_out – data type output
unmapped – default value for values where no mapping exists
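With the maps shown as defaults, classes 1 to 4 collapse to three classes and everything unmapped becomes 0; as a sketch with illustrative folders:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.val.remap_values(
        path_values_in='processing/labels',           # illustrative folders
        path_values_out='processing/labels_remapped',
        map_in='1,2,3,4',    # source values
        map_out='3,1,2,2',   # 1->3, 2->1, 3->2, 4->2
        unmapped='0',        # any other value -> 0
    )
    processing.execute()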
- replace_in_string_array(path_in='__auto__', path_out='__auto__', tokens_to_replace='', replacement_tokens='', extension_file_in='.txt', extension_file_out='.txt', folder_parallel_processing='__auto__')
Replace tokens in a string array
- Parameters:
path_in – Input folder
path_out – Output folder
tokens_to_replace – Tokens to replace, comma separated
replacement_tokens – Replacement tokens, comma separated
- replace_strings(path_in='__auto__', path_out='__auto__', replace_from='', replace_to='', extension_file_in='.txt', extension_file_out='.txt', folder_parallel_processing='__auto__')
Replace substrings in an ASCII file
- Parameters:
path_in – Path to the input folder
path_out – Path to the output folder
replace_from – Comma-separated list of substrings to replace
replace_to – Comma-separated list of replacement substrings
- resize_slice_matrix(pathname_in='__auto__', pathname_out='__auto__', dtype='float', indices=':,124:,:3', default_value=0.0, extension_filename_in='.npy', extension_filename_out='.npy', folder_parallel_processing='__auto__')
Resize and slice a matrix based on indices.
- Parameters:
pathname_in – Input folder for the matrix
pathname_out – Output folder for the resized and sliced matrix
dtype – Data type of the matrix (default: float)
indices – Indices to slice the matrix (in NumPy slicing convention)
default_value – Default value to fill when resizing (default: 0.0)
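The indices string follows NumPy slicing convention; for example ':,124:,:3' keeps all rows, columns from 124 onward, and the first three channels. A sketch with illustrative folders:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.val.resize_slice_matrix(
        pathname_in='processing/features',       # illustrative folders
        pathname_out='processing/features_crop',
        indices=':,124:,:3',    # NumPy-style slice per axis
        default_value=0.0,      # fill value where the result is resized larger
    )
    processing.execute()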
- slice_string_array(path_in='__auto__', path_out='__auto__', slices='', extension_file_in='.txt', extension_file_out='.txt', folder_parallel_processing='__auto__')
Slice a string array
- Parameters:
path_in – Input folder
path_out – Output folder
slices – Slices to take
- sliced_assign_constant(path_in='__auto__', path_out='__auto__', indices=':', constant=0.0, extension_file_in='.npy', extension_file_out='.txt', folder_parallel_processing='__auto__')
Assign a constant value to a slice of a matrix.
- Parameters:
path_in – input folder [.txt or .npy]
path_out – output folder [.txt or .npy]
indices – indices to slice
constant – constant value to assign
- subtract_constant(inpath='__auto__', outpath='__auto__', dtype='float', constant=0.0, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Subtract a constant value from a matrix.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
constant – Constant value to subtract (default: 0.0)
- sum(inpath='__auto__', outpath='__auto__', dtype='float', axis=-1, extension_infile='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Sum the values of a matrix, optionally along an axis.
- Parameters:
inpath – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
axis – axis to sum over [default -1: sum over all values]
- values_add(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise addition of two value arrays.
- Parameters:
path_values1_in – input folder [.npy, .labels or .txt]
path_values2_in – input folder [.npy, .labels or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- values_assign(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise assignment from a second value array.
- Parameters:
path_values1_in – input folder [.npy or .txt]
path_values2_in – input folder [.npy or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- values_distance(pathname_is='__auto__', pathname_should='__auto__', output_path='__auto__', dtype='float', no_type=0.0, value=1.0, gridsize=1.0, extension_filename_is='.npy', extension_filename_should='.npy', extension_output_file='.npy', folder_parallel_processing='__auto__')
Compute the Euclidean distance from the 'is' matrix to the 'should' matrix.
- Parameters:
pathname_is – Input folder for the 'is' matrix
pathname_should – Input folder for the 'should' matrix
output_path – Output folder for the distances matrix
dtype – Data type of the matrices (default: float)
no_type – Value representing no_type in the matrices (default: 0.0)
value – Value representing value in the matrices (default: 1.0)
gridsize – Resolution of the spatial grid in meters (default: 1.0)
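A sketch comparing an 'is' grid against a 'should' grid at 0.5 m resolution; the folder names are illustrative:

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.val.values_distance(
        pathname_is='processing/predicted_grid',     # illustrative folders
        pathname_should='processing/reference_grid',
        output_path='processing/distances',
        gridsize=0.5,   # spatial grid resolution in metres
    )
    processing.execute()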
- values_divide(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise division of two value arrays.
- Parameters:
path_values1_in – input folder [.npy or .txt]
path_values2_in – input folder [.npy or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- values_equal(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise equality comparison of two value arrays.
- Parameters:
path_values1_in – input folder [.npy or .txt]
path_values2_in – input folder [.npy or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- values_greater(inpath1='__auto__', inpath2='__auto__', outpath='__auto__', dtype='float', extension_infile1='.npy', extension_infile2='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Elementwise greater operator on two matrices.
- Parameters:
inpath1 – Input folder
inpath2 – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
- values_greater_equal(inpath1='__auto__', inpath2='__auto__', outpath='__auto__', dtype='float', extension_infile1='.npy', extension_infile2='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Elementwise greater equal operator on two matrices.
- Parameters:
inpath1 – Input folder
inpath2 – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
- values_hstack(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', dtype='str', extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Horizontally stack two value arrays.
- Parameters:
path_values1_in – input folder [.npy, .labels or .txt]
path_values2_in – input folder [.npy, .labels or .txt]
path_values_out – output folder [.txt or .npy]
dtype – data type
- values_less(inpath1='__auto__', inpath2='__auto__', outpath='__auto__', dtype='float', extension_infile1='.npy', extension_infile2='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Elementwise less operator on two matrices.
- Parameters:
inpath1 – Input folder
inpath2 – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
- values_less_equal(inpath1='__auto__', inpath2='__auto__', outpath='__auto__', dtype='float', extension_infile1='.npy', extension_infile2='.npy', extension_outfile='.npy', folder_parallel_processing='__auto__')
Elementwise less equal operator on two matrices.
- Parameters:
inpath1 – Input folder
inpath2 – Input folder
outpath – Output folder
dtype – Data type of the matrix (default: float)
- values_masked_assign(path_values1_in='__auto__', path_values2_in='__auto__', path_mask_in='__auto__', path_values_out='__auto__', extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_mask_in='.npy', extension_file_values_out='.txt', folder_parallel_processing='__auto__')
Assign values from a second array where a [0,1] mask is set.
- Parameters:
path_values1_in – input folder [.txt or .npy]
path_values2_in – input folder [.txt or .npy]
path_mask_in – input folder that contains [0,1] values
path_values_out – output folder [.txt or .npy]
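A sketch that overwrites entries of a base array with values from a second array wherever the mask is 1 (this where-mask-is-1 reading is an assumption; folder names are illustrative):

    from AIPHAProcessing import AIPHAProcessing

    processing = AIPHAProcessing(client)
    processing.val.values_masked_assign(
        path_values1_in='processing/base',      # illustrative folders
        path_values2_in='processing/override',
        path_mask_in='processing/mask',         # folder of [0,1] masks
        path_values_out='processing/combined',
    )
    processing.execute()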
- values_multiply(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise multiplication of two value arrays.
- Parameters:
path_values1_in – input folder [.npy or .txt]
path_values2_in – input folder [.npy or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- values_not_equal(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise inequality comparison of two value arrays.
- Parameters:
path_values1_in – input folder [.npy or .txt]
path_values2_in – input folder [.npy or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- values_sliced_assign(path_in='__auto__', path_out='__auto__', indices=':', path_values_in='__auto__', default_value=0.0, extension_file_in='.npy', extension_file_out='.txt', extension_file_values_in='.npy', folder_parallel_processing='__auto__')
Assign values into a slice of a matrix.
- Parameters:
path_in – input folder [.txt or .npy]
path_out – output folder [.txt or .npy]
indices – indices to slice
path_values_in – input folder [.txt or .npy]
default_value – default value to assign
- values_subtract(path_values1_in='__auto__', path_values2_in='__auto__', path_values_out='__auto__', ignore_label=nan, value_subset1=nan, extension_file_values1_in='.npy', extension_file_values2_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Elementwise subtraction of two value arrays.
- Parameters:
path_values1_in – input folder [.npy or .txt]
path_values2_in – input folder [.npy or .txt]
path_values_out – output folder [.txt or .npy]
ignore_label – value to ignore (default: nan)
value_subset1 – subset value for the first input (default: nan)
- vstack(path_values_in='__auto__', path_values_out='__auto__', dtype='str', extension_file_values_in='.npy', extension_file_values_out='.npy', folder_parallel_processing='__auto__')
Vertically stack values.
- Parameters:
path_values_in – input folder [.npy, .labels or .txt]
path_values_out – output folder [.txt or .npy]
dtype – data type