Cepton

Simulation

Part of the fun of working in the lidar space is that it opens up many adjacent projects, and as part of our ongoing perception work, we are also building tools for simulation and data collection.

Consider all of these simulations to be unofficial, and please refer to the company webpage for official updates!

Update March 19

Happy first update!

Let's take a general look at the simulation project. We break it down into 3 main components.

Yes, but how do we get the point cloud?

This part is fun: because we want the simulation to run in real time, it has to run on the GPU, so we're going to flex the power of our shaders. We create a second camera, positioned on the roof of the car you're seeing below, just above the windshield.
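If you're curious what that camera setup might look like, here's a rough sketch using glam; the mount offset, field of view, and function name are all invented for illustration, not pulled from the simulator.

```rust
use glam::{Mat4, Vec3};

/// Hypothetical sketch: build a view-projection matrix for a sensor mounted
/// on the roof, just above the windshield. All numbers are illustrative.
fn lidar_view_proj(car_transform: &Mat4) -> Mat4 {
    // Mount point: ~1.6 m up and slightly forward, in the car's local frame.
    let eye = car_transform.transform_point3(Vec3::new(0.0, 1.6, 0.5));
    let forward = car_transform.transform_vector3(Vec3::Z);
    let view = Mat4::look_at_rh(eye, eye + forward, Vec3::Y);
    // A vertical FOV generous enough to cover the lidar's scan pattern.
    let proj = Mat4::perspective_rh(60f32.to_radians(), 2.0, 0.1, 300.0);
    proj * view
}
```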

We attach a texture to render into, but instead of writing colors, we're going to write the real-world position of each fragment! Because the GPU interpolates vertex outputs across every triangle, feeding world-space vertex positions into the fragment shader gives us the world coordinate of every pixel you're seeing on the screen.
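Here's a minimal sketch of what such a pass can look like in WGSL (wgpu's shading language), written the way we'd hand it to wgpu. The binding layout, the Rgba32Float render target, and all the names are illustrative assumptions rather than the simulator's actual code.

```rust
// Illustrative WGSL for the position pass; bindings and names are assumptions.
// The color target is Rgba32Float so each texel can hold an xyz position.
const WORLD_POS_SHADER: &str = r#"
struct Camera {
    view_proj: mat4x4<f32>,
}

@group(0) @binding(0) var<uniform> camera: Camera;

struct VsOut {
    @builtin(position) clip_pos: vec4<f32>,
    @location(0) world_pos: vec3<f32>,
}

@vertex
fn vs_main(@location(0) position: vec3<f32>) -> VsOut {
    // Assume `position` is already in world space; a real scene would
    // apply a per-object model matrix first.
    var out: VsOut;
    out.clip_pos = camera.view_proj * vec4<f32>(position, 1.0);
    out.world_pos = position; // the rasterizer interpolates this per fragment
    return out;
}

// Write the interpolated world position where a color would normally go;
// w = 1.0 marks a texel that actually hit geometry (cleared texels stay 0).
@fragment
fn fs_main(in: VsOut) -> @location(0) vec4<f32> {
    return vec4<f32>(in.world_pos, 1.0);
}
"#;
```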

We then generate the firing angles of our lidar (in image coordinates) and map them onto the world-position texture to determine which positions are hit by our scan.
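However you run the lookup, the angle-to-texel math boils down to a pinhole projection. A self-contained sketch, with every name and parameter invented for illustration:

```rust
/// Map a lidar firing direction to a texel in the world-position texture.
/// Hand-rolled pinhole projection; this is not the simulator's real API.
fn firing_angle_to_texel(
    azimuth: f32,   // radians, 0 = straight ahead, positive = right
    elevation: f32, // radians, 0 = horizon, positive = up
    fov_x: f32,     // camera horizontal field of view, radians
    fov_y: f32,     // camera vertical field of view, radians
    width: u32,
    height: u32,
) -> Option<(u32, u32)> {
    // Turn the firing angles into a unit direction (forward = +z, up = +y).
    let (dx, dy, dz) = (
        elevation.cos() * azimuth.sin(),
        elevation.sin(),
        elevation.cos() * azimuth.cos(),
    );
    if dz <= 0.0 {
        return None; // beam points away from the camera
    }
    // Pinhole projection onto the image plane, normalized to [-1, 1].
    let x = (dx / dz) / (fov_x / 2.0).tan();
    let y = (dy / dz) / (fov_y / 2.0).tan();
    if x.abs() > 1.0 || y.abs() > 1.0 {
        return None; // beam falls outside the camera frustum
    }
    // NDC -> pixel coordinates (image origin at the top-left).
    let px = ((x + 1.0) * 0.5 * width as f32) as u32;
    let py = ((1.0 - y) * 0.5 * height as f32) as u32;
    Some((px.min(width - 1), py.min(height - 1)))
}
```

Sampling the world-position texture at the returned texel then gives the beam's world-space hit point, with w = 0 marking sky (nothing was rendered there).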

At this point we have everything we need, so the final step is to render everything, point cloud and scene objects alike, into the final display you're seeing below.
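For flavor, here is roughly how that composite pass can be structured in a recent wgpu. The pipelines, buffers, and bind group are assumed to exist already, so treat this as the shape of a frame, not a quote from our renderer.

```rust
// One render pass that draws the scene, then splats the point cloud on top.
// `encoder`, the pipelines, and the buffers are assumed to exist already.
let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    label: Some("final display pass"),
    color_attachments: &[Some(wgpu::RenderPassColorAttachment {
        view: &frame_view,
        resolve_target: None,
        ops: wgpu::Operations {
            load: wgpu::LoadOp::Clear(wgpu::Color::BLACK),
            store: wgpu::StoreOp::Store,
        },
    })],
    depth_stencil_attachment: None,
    timestamp_writes: None,
    occlusion_query_set: None,
});
pass.set_bind_group(0, &camera_bind_group, &[]);

// Scene geometry first...
pass.set_pipeline(&scene_pipeline);
pass.set_vertex_buffer(0, scene_vertices.slice(..));
pass.draw(0..scene_vertex_count, 0..1);

// ...then the lidar returns as points (PrimitiveTopology::PointList).
pass.set_pipeline(&point_pipeline);
pass.set_vertex_buffer(0, point_vertices.slice(..));
pass.draw(0..point_count, 0..1);
drop(pass); // end the pass before submitting the encoder
```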

Dev Links

The simulator and content server are written in Rust. Rust's compile-time checking speeds up development a ton, even more so on a graphics project like this.

All rendering is done with wgpu. It's an awesome graphics API and I'd highly recommend trying it out.