In the last few years, deep learning semantic segmentation has quickly become the state of the art in image analysis. The peculiarity of this technique is that it classifies an image pixel-wise, assigning a label to every pixel rather than a single label to the whole image.
In this talk, we present a vision-based deep learning semantic segmentation technique to identify hazardous terrain features on the Moon. Thanks to new ray-tracing software, we were able to generate photorealistic images and use them to train a U-Net architecture to recognize craters and steep terrain during a powered descent trajectory. At each iteration, the algorithm takes the classified image and selects the safest landing site based on the distance to the nearest hazard and the lander's margin of maneuver.
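The site-selection step described above can be sketched as follows: given the segmented hazard map, score every safe pixel by its distance to the nearest hazard and land where that distance is largest. This is a minimal NumPy illustration, not the authors' implementation; the function name and the brute-force distance computation are assumptions for clarity (a real system would use a fast distance transform and fold in the maneuver margin).

```python
import numpy as np

def select_landing_site(hazard_mask):
    """Pick the safe pixel farthest from any hazard.

    hazard_mask: 2D bool array from the segmentation network,
                 True = hazard (crater or steep terrain).
    Returns ((row, col), distance_to_nearest_hazard).
    """
    h, w = hazard_mask.shape
    hazards = np.argwhere(hazard_mask)              # (n_hazards, 2) coordinates
    ys, xs = np.mgrid[0:h, 0:w]
    pixels = np.stack([ys.ravel(), xs.ravel()], axis=1)
    # Euclidean distance from every pixel to its nearest hazard pixel
    # (brute force; fine for small maps, use a distance transform in practice)
    dists = np.sqrt(((pixels[:, None, :] - hazards[None, :, :]) ** 2)
                    .sum(axis=-1)).min(axis=1).reshape(h, w)
    dists[hazard_mask] = -1.0                       # never land on a hazard
    site = np.unravel_index(np.argmax(dists), dists.shape)
    return site, dists[site]
```

The maximum-distance criterion is a common heuristic for hazard avoidance: it maximizes the clearance around the touchdown point, which directly reflects the "distance to the nearest hazard" term in the selection rule above.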