Points to Lidar Channels #60

Comments
Some clarifications as I investigate this further. In the notebook above I have applied the lidar_to_ego transform. However, from what I can see, the (0, 0, 0) point of the untransformed lidar points from the spinning sensor is on the ground... which implies something funny is going on :)
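For reference, a minimal sketch of the sanity check behind that observation, assuming the pandaset devkit's geometry helpers (the dataset path and sequence id below are placeholders):

```python
# Hypothetical sanity check: bring world-frame points back into the
# per-frame "sensor" frame and look at where the origin ends up.
import numpy as np
from pandaset import DataSet, geometry

dataset = DataSet("/data/pandaset")   # placeholder path
seq = dataset["002"]                  # placeholder sequence
seq.load_lidar()

points_world = seq.lidar[0].to_numpy()[:, :3]
pose = seq.lidar.poses[0]

# lidar_points_to_ego applies the inverse of the stored pose,
# i.e. world -> pose frame.
points_sensor = geometry.lidar_points_to_ego(points_world, pose)

# If the pose were truly the lidar center, the nearest points should
# cluster around the sensor's minimum range, not around ground level.
print("min z in pose frame:", points_sensor[:, 2].min())
print("median z in pose frame:", np.median(points_sensor[:, 2]))
```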
Idea! I can use the timestamps of the points to group them into columns and more easily understand which point is which channel! Right! Shall let you know if that works.
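A rough sketch of that grouping idea (the column/row assignment here is a guess for illustration, not the devkit's API; it orders each timestamp group by elevation):

```python
import numpy as np

def group_points_by_timestamp(points_xyz, timestamps):
    """Group points that share a firing timestamp into columns,
    then order each column by elevation as a guess at the row.

    points_xyz: (N, 3) array, assumed to be in the sensor frame.
    timestamps: (N,) per-point timestamps from the lidar frame.
    """
    # Points fired at the same instant belong to the same column.
    unique_ts, col_idx = np.unique(timestamps, return_inverse=True)

    elevation = np.arctan2(points_xyz[:, 2],
                           np.linalg.norm(points_xyz[:, :2], axis=1))
    row_idx = np.full(len(points_xyz), -1, dtype=int)
    for c in range(len(unique_ts)):
        members = np.where(col_idx == c)[0]
        # Highest elevation first -> row 0 (a naive assumption).
        order = members[np.argsort(-elevation[members])]
        row_idx[order] = np.arange(len(order))
    return col_idx, row_idx
```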
That works. I will implement a function to compute column and row later. @nisseknudsen
Just an update on this: I couldn't quite get the timestamp grouping approach to work. The issue is that many timestamp groups are missing returns, so while you can always get an ordering within a group of timestamps, there's no reliable way to get a one-to-one correspondence with a row. @xpchuan-95 if there is a way to implement the row/col mapping of a point I'd be happy to contribute if you can point me in the right direction. I'm scratching my head on this one, but without the transform to the middle of the lidar scan I'm not sure how I could do this :). I'd love to hear your ideas.
hmm, looking at the hesai64 docs (and some hesai ROS source code here) it looks like there is a direct mapping from the timestamp to a channel; however, I couldn't make that mapping work with the timestamps as provided.
sooo this leads me back to my original "yaw == channel" idea :D but that necessitates knowing the actual lidar transform :) which I do not know... sadness.
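For completeness, a sketch of what that "angle == channel" assignment could look like if the points were in the true sensor frame (which is exactly the missing piece discussed above). The elevation table below is a placeholder; the real 64-entry table is in the Pandar64 manual:

```python
import numpy as np

# Placeholder values only; substitute the 64 per-channel elevation
# angles from the Pandar64 user's manual.
PANDAR64_ELEVATIONS_DEG = np.array([15.0, 11.0, 8.0])

def channel_from_elevation(points_sensor):
    """Assign each point to the channel with the nearest elevation angle.

    points_sensor: (N, 3) xyz array in the true lidar sensor frame.
    """
    elev = np.degrees(np.arctan2(points_sensor[:, 2],
                                 np.linalg.norm(points_sensor[:, :2], axis=1)))
    # Nearest-neighbour match against the (placeholder) elevation table.
    return np.abs(elev[:, None] - PANDAR64_ELEVATIONS_DEG[None, :]).argmin(axis=1)
```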
I have checked my resources; it will take some time to get the laser id. What do you use the laser id for? For generating a range map?
Thanks! Sad to hear it will take a while, but I look forward to it :). In the meantime I'm working on a registration-based approach to at least get the true lidar position. We're aiming to do some domain adaptation experiments, for which the channel would be useful for creating some baselines.
extrinsic_calibration.txt
But in fact the points in PandaSet are motion-compensated, so using the transform above you can't get the exact raw point cloud, only a similar one.
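A hypothetical sketch of applying such an extrinsic. The actual format of extrinsic_calibration.txt is not shown in this thread, so the translation-plus-roll/pitch/yaw parameterisation below is an assumption:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def ego_to_lidar(points_ego, tx, ty, tz, roll, pitch, yaw):
    """Move ego/baselink-frame points into an approximate lidar frame.

    Assumes the extrinsic gives the lidar pose in the ego frame as a
    translation (metres) and roll/pitch/yaw rotation (degrees).
    Because the published points are motion-compensated, this only
    approximates the raw sensor-frame cloud, per the comment above.
    """
    R = Rotation.from_euler("xyz", [roll, pitch, yaw], degrees=True).as_matrix()
    t = np.array([tx, ty, tz])
    # Inverse of "lidar pose in ego": x_lidar = R^T (x_ego - t)
    return (points_ego - t) @ R
```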
ah hah! :D interesting! Yes, I think I can use that. Of course, if the pre-motion-compensation frames could be used to assign a point to a channel, that would be more accurate... I understand that would be tricky to get into your pipeline, but I'd be super appreciative of it, so treat this as a vote towards that making its way into your sprints :) ... but yeah, these extrinsic transforms might just be enough (and probably better than any registration I could cook up ;))
Also, we managed to get the laser id. It comes from the source data and is accurate, but only for the Pandar64, so you need to call set_sensor(0).
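A minimal sketch of selecting the mechanical Pandar64 via the devkit's set_sensor; the dataset path and sequence id are placeholders, and the format of the shared laser-id files is not specified in this thread:

```python
from pandaset import DataSet

dataset = DataSet("/data/pandaset")  # placeholder path
seq = dataset["002"]                 # placeholder sequence
seq.load_lidar()

# 0 = mechanical 360-degree Pandar64, 1 = forward-facing PandarGT.
# The laser id mapping only applies to the Pandar64.
seq.lidar.set_sensor(0)
frame0 = seq.lidar[0]  # pandas DataFrame with columns x, y, z, i, t, d
print(frame0.shape)
```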
!! Nice, that was fast! Great, can I have access to the drive? I've requested access. Thank you so much for the effort; I'll investigate this as soon as I have access.
You can't access the Google Drive link? It's public.
The link works now, I'll play with it this evening, thanks again!
Victory! Thank you @xpchuan-95 ! :)
Great!
Hey guys,
Is there a way to map lidar points to the lidar channel that produced them? Specifically those channels reported here: https://hesaiweb2019.blob.core.chinacloudapi.cn/uploads/Pandar64_User's_Manual.pdf
I've tried something simple along the lines of:

```python
import numpy as np

# angle used to colour the points (points is an (N, 3) xyz array)
theta = np.arctan2(points[..., 2], points[..., 1])
```

but the visualisations don't look quite right when I colour points red where `theta > 3 * np.pi / 180` and yellow where `theta < 3 * np.pi / 180`. According to the Hesai data sheet I was expecting to see 4 distinct bands, but that didn't work out :). Attached is what I see (I also included a green rectangle whose corner is at (0, 0, 0) and which is 100 units long).

My impression is that the provided lidar pose is in fact the baselink on the vehicle, as opposed to the center of the lidar, which in turn makes my "find the lidar channel" logic work incorrectly. The reason I think this is that if I add around 2-3 m to the "lidar_to_ego" corrected points I get the following:

But of course "about 3 m" isn't quite the whole story, because the lidar unit has a little tilt as well :). I guess what I need is the transform from baselink to the lidar sensor?
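A minimal sketch of that shift experiment (the 3 m offset and the 3-degree threshold are the guesses from this issue, not calibrated values):

```python
import numpy as np

# points: (N, 3) xyz array after the lidar_to_ego correction.
GUESSED_LIDAR_HEIGHT = 3.0  # metres; the rough "2-3 m" guess from above

shifted = points + np.array([0.0, 0.0, GUESSED_LIDAR_HEIGHT])
theta = np.arctan2(shifted[..., 2], shifted[..., 1])

# colour points by the same 3-degree threshold as before
is_red = theta > 3 * np.pi / 180
```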
I've also attached the notebook I used to produce the images
view.tar.gz
Any pointers?