Hi, you are encountering this problem because of a bug in the original code in `./ml3d/torch/modules/losses/semseg_loss.py`. `nn.CrossEntropyLoss` expects a weight tensor with 8 values (the number of valid classes), but the class weights are passed with an extra leading dimension, so the tensor has shape `[1, 8]` instead of `[8]`. You should use `.squeeze()` to fix this. Here is my corrected code (around line 40):
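(This is a sketch against the stock `SemSegLoss.__init__` in Open3D-ML; the exact lines may differ slightly between versions.)

```python
# ml3d/torch/modules/losses/semseg_loss.py, inside SemSegLoss.__init__
if 'class_weights' in dataset.cfg.keys() and len(
        dataset.cfg.class_weights) != 0:
    class_wt = DataProcessing.get_class_weights(dataset.cfg.class_weights)
    weights = torch.tensor(class_wt, dtype=torch.float, device=device)

    # get_class_weights() returns the weights with an extra leading
    # dimension, so squeeze it away before handing the tensor to
    # CrossEntropyLoss; otherwise the weight has shape [1, 8] instead of [8].
    self.weighted_CrossEntropyLoss = nn.CrossEntropyLoss(
        weight=weights.squeeze())
else:
    self.weighted_CrossEntropyLoss = nn.CrossEntropyLoss()
```

The extra dimension comes from `DataProcessing.get_class_weights()`, which returns a `(1, num_classes)` array, so the tensor built from it is `[1, 8]` rather than the flat `[8]` that `nn.CrossEntropyLoss` expects.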
I am training on the Toronto3D dataset with the RandLANet model, but during training I am getting this error:
RuntimeError: weight tensor should be defined either for all 8 classes or no classes but got weight tensor of shape: [1, 8]
I have not changed anything in the code or the config file. What could be the problem?
I will attach the config file and the training code below.
Config
```yaml
dataset:
  name: Toronto3D
  cache_dir: ./logs/cache
  class_weights: [41697357, 1745448, 6572572, 19136493, 674897, 897825, 4634634, 374721]
  ignored_label_inds:
  num_classes: 8
  num_points: 65536
  test_files:
  test_result_folder: ./test
  train_files:
  use_cache: true
  val_files:
  steps_per_epoch_train: 100
  steps_per_epoch_valid: 10
model:
  name: RandLANet
  batcher: DefaultBatcher
  ckpt_path: ./logs/randlanet_toronto3d_202201071330utc.pth
  num_neighbors: 16
  num_layers: 5
  num_points: 65536
  num_classes: 8
  ignored_label_inds: [0]
  sub_sampling_ratio: [4, 4, 4, 4, 2]
  in_channels: 6
  dim_features: 8
  dim_output: [16, 64, 128, 256, 512]
  grid_size: 0.05
  augment:
    recenter:
      dim: [0, 1, 2]
    normalize:
      points:
        method: linear
pipeline:
  name: SemanticSegmentation
  optimizer:
    lr: 0.001
  batch_size: 2
  main_log_dir: ./logs
  max_epoch: 200
  save_ckpt_freq: 5
  scheduler_gamma: 0.99
  test_batch_size: 1
  train_sum_dir: train_log
  val_batch_size: 2
  summary:
    record_for: []
    max_pts:
    use_reference: false
    max_outputs: 1
```
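For reference, the `class_weights` in the dataset section are raw per-class point counts. A rough sketch of how they end up as the loss weights (this approximates `DataProcessing.get_class_weights`, not the exact library code) also shows where the `[1, 8]` shape in the error comes from:

```python
import numpy as np

# Per-class point counts from the config above.
counts = np.array([41697357, 1745448, 6572572, 19136493,
                   674897, 897825, 4634634, 374721], dtype=np.float32)

# Inverse-frequency weighting (approximation of the library helper).
frequency = counts / counts.sum()
weights = np.expand_dims(1.0 / (frequency + 0.02), axis=0)

print(weights.shape)  # (1, 8) -> must be squeezed to (8,) for CrossEntropyLoss
```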
Training Code
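The script follows the standard Open3D-ML semantic segmentation example (a minimal sketch; the config path and the `./Toronto_3D` dataset path are placeholders):

```python
import open3d.ml as _ml3d
import open3d.ml.torch as ml3d

# Load the RandLANet / Toronto3D config shown above (placeholder path).
cfg = _ml3d.utils.Config.load_from_file("randlanet_toronto3d.yml")

# Point the dataset section at the local Toronto3D copy (placeholder path).
cfg.dataset['dataset_path'] = "./Toronto_3D"
dataset = ml3d.datasets.Toronto3D(cfg.dataset.pop('dataset_path', None),
                                  **cfg.dataset)

model = ml3d.models.RandLANet(**cfg.model)
pipeline = ml3d.pipelines.SemanticSegmentation(model=model,
                                               dataset=dataset,
                                               device="gpu",
                                               **cfg.pipeline)

# Training fails inside the loss with the RuntimeError shown above.
pipeline.run_train()
```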