Vision-based Neural Scene Representations for Spacecraft dataset

Dataset: 6696037



DOI: 10.5281/zenodo.4701174
Zenodo: 4701174
MaRDI QID: Q6696037
FDO: Q6696037

Dataset published in the Zenodo repository.

Gurvan Lecuyer, Anne Mergy, Dario Izzo, Dawa Derksen

Publication date: 19 April 2021

Copyright license: Creative Commons Attribution 4.0 International



Real space datasets are scarce and often come with limited metadata to supervise the training of learning algorithms. Using synthetic data allows us to produce a large dataset in a controlled environment, which eases the production of annotations. We generate the data with the 3D engine Unity using models of two different satellites: a CubeSat and the Soil Moisture and Ocean Salinity (SMOS) satellite.

The CubeSat is a small satellite based on a 3U CubeSat platform. It is a rectangular cuboid of 0.3 x 0.3 x 0.9 m, with a main structure made of aluminium and black PCB panels on its sides. For this satellite model, we place the camera at 1 m to render the dataset images; the near and far bounds are fixed at 0.1 m and 2 m.

SMOS has a more complicated and elongated shape. The main platform is a cube of 0.9 x 0.9 x 1.0 m with solar panels attached on two sides, each 6.5 m long. The payload is a three-branch antenna, each branch 3 m long, placed at 60 degrees. The structure is covered by golden and silvered foils, which are highly reflective materials. For this satellite model, we place the camera at 10 m to render the images; the near and far bounds are fixed at 3 m and 17 m because of the solar panel length.

The scene is composed of one satellite, SMOS or CubeSat, with one directional light source fixed with respect to the targeted object. The images are rendered at a resolution of 1024 x 1024 pixels from viewpoints sampled on a full sphere, against a uniform black background. For each image, the distance to the camera and the azimuth and elevation angles are saved as metadata, and a depth map is rendered for testing the predicted shape.

We generate training and validation sets containing 5, 10, 50 and 100 images, respectively, to evaluate the model during training. We also generate a test set of 100 images taken from viewing directions different from those used in the training and validation sets. This common test set is used to evaluate our models regardless of the number of training images used.
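As an illustration of how the saved viewpoint metadata described above can be used, the sketch below converts a (distance, azimuth, elevation) triple into a camera position and a camera-to-world pose, as one would do when feeding the rendered images into a NeRF-style pipeline. This is not part of the dataset or the authors' code: the function names, the coordinate conventions (target at the origin, z-up, camera looking down its -z axis), and the way the quoted distances and near/far bounds are grouped are all assumptions made for illustration.

```python
# Minimal sketch (assumed conventions, not the dataset's own tooling):
# turn the per-image metadata (distance, azimuth, elevation) into a camera pose.
import numpy as np

# Camera distances and near/far bounds quoted in the description above.
CUBESAT = {"distance": 1.0, "near": 0.1, "far": 2.0}
SMOS = {"distance": 10.0, "near": 3.0, "far": 17.0}

def camera_position(distance, azimuth_deg, elevation_deg):
    """Spherical (distance, azimuth, elevation) -> Cartesian camera position,
    assuming the target satellite sits at the origin."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = distance * np.cos(el) * np.cos(az)
    y = distance * np.cos(el) * np.sin(az)
    z = distance * np.sin(el)
    return np.array([x, y, z])

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 camera-to-world pose looking from `eye` towards `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = -forward  # assumed convention: camera looks down its -z axis
    pose[:3, 3] = eye
    return pose

# Example: a CubeSat view at azimuth 45 deg, elevation 30 deg.
eye = camera_position(CUBESAT["distance"], 45.0, 30.0)
print(look_at(eye))
```

The near and far bounds from the description (0.1 m to 2 m for the CubeSat, 3 m to 17 m for SMOS) would typically be passed alongside such poses to bound ray sampling in a neural rendering model.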







This page was built for dataset: Vision-based Neural Scene Representations for Spacecraft dataset