
Inference on custom dataset (point cloud files in the same format as SemanticKITTI .bin) #42

Open · DavideCoppola97 opened this issue Aug 17, 2021 · 2 comments

Comments

@DavideCoppola97

Hello, and thanks for your work!
I generated my dataset with a simulator, trying to create point clouds as similar as possible to those of SemanticKITTI: a simulated 64-beam Velodyne with the same FOV parameters and number of points.
The problem is that I only have the .bin files and no labels. I would simply like to use your network to run inference on my points and generate the labels. What can I do? Thank you.
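For reference, here is a minimal sketch of how a custom cloud could be dumped in the SemanticKITTI .bin layout (N x 4 float32: x, y, z, intensity) under the `sequences/<seq>/velodyne/` folder structure that the data loader and the test script below assume. The function name, paths, and the random data are placeholders for whatever the simulator actually produces:

```python
import os
import numpy as np

def save_semantickitti_bin(points_xyz, intensity, out_path):
    """Write an (N, 3) xyz array plus per-point intensity as a SemanticKITTI-style .bin."""
    cloud = np.hstack([points_xyz.astype(np.float32),
                       intensity.reshape(-1, 1).astype(np.float32)])  # (N, 4) float32
    cloud.tofile(out_path)  # flat float32 binary, same layout as the KITTI velodyne files

# Hypothetical example: one simulated scan stored as sequence 00, frame 000000
dataset_root = 'my_carla_dataset'                    # placeholder path
scan_dir = os.path.join(dataset_root, 'sequences', '00', 'velodyne')
os.makedirs(scan_dir, exist_ok=True)

xyz = np.random.rand(1000, 3).astype(np.float32)     # stand-in for simulator output
inten = np.zeros(1000, dtype=np.float32)             # use real intensity if available
save_semantickitti_bin(xyz, inten, os.path.join(scan_dir, '000000.bin'))
```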

@edwardzhou130
Owner

You should be able to generate the label predictions using the same testing script if you change the test data path to your custom dataset:

```python
# test
print('*'*80)
print('Generate predictions for test split')
print('*'*80)
pbar = tqdm(total=len(test_dataset_loader))
with torch.no_grad():
    for i_iter_test,(_,_,test_grid,_,test_pt_fea,test_index) in enumerate(test_dataset_loader):
        # predict
        test_pt_fea_ten = [torch.from_numpy(i).type(torch.FloatTensor).to(pytorch_device) for i in test_pt_fea]
        test_grid_ten = [torch.from_numpy(i[:,:2]).to(pytorch_device) for i in test_grid]
        predict_labels = my_model(test_pt_fea_ten,test_grid_ten)
        predict_labels = torch.argmax(predict_labels,1).type(torch.uint8)
        predict_labels = predict_labels.cpu().detach().numpy()
        # write to label file
        for count,i_test_grid in enumerate(test_grid):
            test_pred_label = predict_labels[count,test_grid[count][:,0],test_grid[count][:,1],test_grid[count][:,2]]
            test_pred_label = train2SemKITTI(test_pred_label)
            test_pred_label = np.expand_dims(test_pred_label,axis=1)
            save_dir = test_pt_dataset.im_idx[test_index[count]]
            _,dir2 = save_dir.split('/sequences/',1)
            new_save_dir = output_path + '/sequences/' + dir2.replace('velodyne','predictions')[:-3] + 'label'
            if not os.path.exists(os.path.dirname(new_save_dir)):
                try:
                    os.makedirs(os.path.dirname(new_save_dir))
                except OSError as exc:
                    if exc.errno != errno.EEXIST:
                        raise
            test_pred_label = test_pred_label.astype(np.uint32)
            test_pred_label.tofile(new_save_dir)
        pbar.update(1)
        del test_grid,test_pt_fea,test_index
pbar.close()
```
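The script writes one .label file per scan, containing a single uint32 per point in the same order as the input .bin. As a quick sanity check, something along these lines (the path is only an example, adjust it to your `output_path`) could read a prediction back and print the class distribution:

```python
import numpy as np

pred_path = 'out/sequences/00/predictions/000000.label'  # example path
labels = np.fromfile(pred_path, dtype=np.uint32)
semantic = labels & 0xFFFF  # lower 16 bits hold the semantic class in the SemanticKITTI format

classes, counts = np.unique(semantic, return_counts=True)
for c, n in zip(classes, counts):
    print(f'class {c}: {n} points')
```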

However, the prediction quality might not be very good. I suggest first visualizing 5-10 random predictions to check whether they meet your needs.
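A rough visualization sketch along those lines, assuming Open3D is installed and using hypothetical file paths; it colors each point by its predicted class with a random per-class palette (a stand-in for the official SemanticKITTI color map, which is not reproduced here):

```python
import numpy as np
import open3d as o3d

bin_path = 'my_carla_dataset/sequences/00/velodyne/000000.bin'  # example paths
label_path = 'out/sequences/00/predictions/000000.label'

points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)[:, :3]
labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF

# one random color per class id (placeholder for the official SemanticKITTI palette)
rng = np.random.default_rng(0)
palette = rng.random((int(labels.max()) + 1, 3))

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
pcd.colors = o3d.utility.Vector3dVector(palette[labels])
o3d.visualization.draw_geometries([pcd])
```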

@DavideCoppola97
Author

Thanks, but I was already able to run inference, both with this network and with others (RangeNet++, SqueezeSegV3, etc.). Unfortunately the results are really poor, probably because my data come from a simulator (CARLA) that in turn simulates a lidar; the networks cannot segment even the road correctly. As soon as I can, I will try fine-tuning to improve the networks' performance on my point clouds.
