
Different semantic results of suma++ #6

Closed
TT22TY opened this issue Nov 20, 2019 · 9 comments

Comments


TT22TY commented Nov 20, 2019

Hello,

Thanks for your work! After building SuMa++ with the pretrained model provided on the webpage, I obtain the result shown below, and the semantic result is quite different from the picture you provided. I am wondering why this happens (is it due to the pretrained model?). I also tried a model trained from scratch by myself; it works well with RangeNet++'s infer.py, but the SuMa++ result is still strange. Besides, the results of rangenet_lib are also different from the picture you provided.
Looking forward to your reply. :) Thanks!

[screenshot of the SuMa++ result attached]

@Chen-Xieyuanli (Member)

Hey,

Thanks for using our code.

It looks like a problem in the semantic segmentation part, while the SLAM part still works. You could visualize the semantic segmentation results by checking the "Show semantic map" box.

Several reasons may cause this problem:

  1. Our rangenet_lib and the provided pretrained model currently only work well with KITTI LiDAR data. They may produce wrong predictions when you test with other LiDAR data (a quick format check is sketched after this comment).
  2. The runtime TensorRT model is generated depending on your hardware, so it may not work with other graphics cards or driver versions. Could you please provide more information about your hardware setup? We may also run a test later.
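
As a quick, hedged sanity check (not part of the SuMa++ or rangenet_lib code), the snippet below verifies that a scan file at least parses as a KITTI-style Velodyne scan, i.e. a flat float32 array of (x, y, z, remission) tuples; the file path is a placeholder for your own data.

```python
import numpy as np

scan_path = "/path/to/sequences/00/velodyne/000000.bin"  # placeholder path

# KITTI Velodyne scans are stored as float32 values, four per point.
raw = np.fromfile(scan_path, dtype=np.float32)
if raw.size % 4 != 0:
    raise ValueError("not a KITTI-format scan: size is not a multiple of 4 floats")

points = raw.reshape(-1, 4)  # columns: x, y, z, remission
print("points:", points.shape[0])
print("max range [m]:", np.linalg.norm(points[:, :3], axis=1).max())
print("remission range:", points[:, 3].min(), "to", points[:, 3].max())
# KITTI remission values are normalized to [0, 1]; values far outside that range
# suggest the data is not in the format the pretrained model expects.
```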

@TT22TY

This comment has been minimized.

@Chen-Xieyuanli (Member)

The problem should come from rangenet_lib, since the semantic segmentation input to SuMa++ is not correct.

Have you already tried the example demo of rangenet_lib? The semantic segmentation result visualized via "Show semantic map" should be the same as that of rangenet_lib.

You may test and make sure rangenet_lib works first. If the example demo of rangenet_lib works well, SuMa++ should also work, and vice versa.
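
If the demo runs but the output still looks wrong, a small check like the one below (a sketch, not part of either repository) can tell whether the predicted labels are degenerate. It assumes the predictions were saved in the SemanticKITTI .label format, i.e. one uint32 per point with the semantic class in the lower 16 bits, and uses a placeholder file path.

```python
import numpy as np

label_path = "/path/to/predictions/000000.label"  # placeholder path

# Keep only the semantic class id (lower 16 bits of each uint32 label).
labels = np.fromfile(label_path, dtype=np.uint32) & 0xFFFF

classes, counts = np.unique(labels, return_counts=True)
for c, n in sorted(zip(classes, counts), key=lambda x: -x[1])[:5]:
    print(f"class {c:3d}: {n / labels.size:6.1%}")
# A healthy KITTI scan is usually spread over road, building, vegetation, car, etc.;
# if 90%+ of the points land in a single class, the segmentation feeding SuMa++ is broken.
```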

@TT22TY

This comment has been minimized.

@Chen-Xieyuanli (Member)

Hi @TT22TY,
Since the problem comes from rangenet_lib and you also opened a new issue there, I'm going to close this one.

@Chen-Xieyuanli (Member)

Hi @TT22TY

The problem seems to be caused by an incompatibility of our code with TensorRT version 6.

For more details, please see PRBonn/rangenet_lib#9.
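
One simple way to confirm which TensorRT version is installed (assuming the optional TensorRT Python bindings are present; rangenet_lib itself uses the C++ API, and on Ubuntu `dpkg -l | grep nvinfer` reports the same information):

```python
import tensorrt  # optional Python bindings shipped with TensorRT

print("TensorRT version:", tensorrt.__version__)
# If this prints 6.x, see PRBonn/rangenet_lib#9 for the setups that were tested to work.
```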

@TT22TY

This comment has been minimized.

@Chen-Xieyuanli reopened this Dec 5, 2019
@Chen-Xieyuanli (Member)

Hi @TT22TY,

I've listed the tested setups in PRBonn/rangenet_lib#9.

To keep the discussion in one place, I am going to close this issue again.

@G12311231

> [quotes @Chen-Xieyuanli's earlier reply above, listing the two possible causes]

Hi,
If I use the same sensor (VLP-64) to collect data on my campus, will the parameters of the pre-trained model work on this dataset?
