A Point Cloud (PC) describes a three-dimensional object as an unordered list of coordinates of points sampled on the object's surface. Most often, point cloud data are generated by 3D laser scanners and LiDAR (light detection and ranging) technology. Today, point clouds are becoming ubiquitous in many applications, including autonomous driving and navigation, robotics, and virtual and augmented reality. Since point clouds can contain hundreds of millions of points, storing and processing them in their raw form quickly becomes a bottleneck of a processing system. Deep learning models cannot be applied to point cloud data directly, because the data are unstructured and unordered: common models such as convolutional neural networks (CNNs) assume their input to be ordered, regular, and laid out on a structured grid. Early approaches converted point clouds into a structured grid format, often via voxelization, but these methods do not scale well to dense 3D data, since their computation and memory footprint grows cubically with the grid resolution.
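The cubic scaling of voxelization can be illustrated with a minimal NumPy sketch (the `voxelize` helper below is hypothetical, for illustration only): the occupancy grid takes resolution³ cells no matter how many points the cloud actually contains.

```python
import numpy as np

def voxelize(points, resolution):
    # Map points in the unit cube [0, 1)^3 to a boolean occupancy grid.
    # Illustrative helper, not a production voxelizer.
    grid = np.zeros((resolution,) * 3, dtype=bool)
    idx = np.clip((points * resolution).astype(int), 0, resolution - 1)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))          # a cloud of only 1000 points
for r in (32, 64, 128):
    g = voxelize(pts, r)
    print(r, g.size)                 # cells grow cubically: 32768, 262144, 2097152
```

Doubling the resolution multiplies the grid size by eight, while the number of occupied cells stays bounded by the (typically much smaller) number of points.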
Currently, the emerging field of geometric deep learning is gaining popularity, as its models can consume raw point cloud data. The pioneering PointNet architecture handles unordered sets of real-valued points by means of a permutation-invariant feature aggregation function. Its extensions include hierarchical models such as PointNet++ and many others. A recent survey gives a comprehensive overview of these and related developments in deep learning for 3D point clouds.
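The core of PointNet's permutation invariance can be sketched in a few lines of NumPy: a shared per-point transformation followed by a symmetric max-pool yields a global feature that does not depend on the order of the input points. The single random linear layer below is a toy stand-in for PointNet's shared MLPs, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared per-point transform (one random linear layer + ReLU for
# illustration), followed by a symmetric max-pool over the points --
# the permutation-invariant aggregation at the heart of PointNet.
W = rng.standard_normal((3, 16))

def global_feature(points):                 # points: (N, 3)
    per_point = np.maximum(points @ W, 0)   # (N, 16) point-wise features
    return per_point.max(axis=0)            # (16,) order-independent pooling

cloud = rng.random((128, 3))
shuffled = cloud[rng.permutation(len(cloud))]
assert np.allclose(global_feature(cloud), global_feature(shuffled))
```

Because the max is taken over the set of per-point features, any permutation of the input rows produces exactly the same global descriptor.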
An important open problem is the lack of semi-supervised approaches for point clouds. While deep learning methods typically rely on the availability of huge amounts of training data, labelled data in the domain of PC processing are very limited. Thus, in this thesis we want to explore the development of generative learning models for point clouds with only partial supervision (semi-supervised methods).
Figure 1: Processing point clouds with PointNet.
Figure 2: Architecture from .
The goal of this thesis is to explore the generation and reconstruction performance of deep generative models for 3D point clouds, and to explore the potential of these models for unsupervised classification of point cloud data. The models should be trained to generate a set of points in 3D space that resembles the objects on which the model was previously trained. Visualization of the latent vectors will serve for clustering. We will build on the PointNet architecture, employing a variational autoencoder (VAE).
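As a reminder of the generic VAE machinery the thesis will build on, the sketch below shows the standard reparameterization trick and the closed-form KL term against a standard normal prior. These are textbook VAE components, not the specific model to be developed; the toy `mu`/`log_var` arrays stand in for what a trained encoder would emit.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # VAE reparameterization trick: sample z ~ N(mu, sigma^2) as a
    # differentiable function of the encoder outputs (mu, log_var).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), summed over latent dims.
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

# Toy latent statistics for a batch of 4 point clouds, 8 latent dims.
mu = rng.standard_normal((4, 8))
log_var = rng.standard_normal((4, 8))
z = reparameterize(mu, log_var)        # (4, 8) latent codes for the decoder
loss_kl = kl_divergence(mu, log_var)   # (4,) per-sample regularization term
```

In the point cloud setting, the reconstruction part of the VAE loss is typically a set-to-set distance (e.g. Chamfer distance) rather than a per-element error, since the output points are unordered.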
The concrete tasks are:
1. Become familiar with the current literature on geometric deep learning for point cloud data, in particular the PointNet and PointNet++ architectures.
2. Incorporate autoencoders into these models, following recent approaches such as .
3. Run experiments on the ModelNet40 dataset and, based on the results, consider possible improvements to the model.
4. Explore the potential of the model for clustering point cloud data using t-SNE visualization of the latent vectors.
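Task 4 could, for instance, follow the standard scikit-learn recipe sketched below. The latent codes here are synthetic stand-ins for encoder outputs (two artificial clusters); in the thesis they would come from the trained VAE encoder.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical latent codes for two object classes: 15 samples each,
# 8 latent dimensions, drawn around two different centers.
latents = np.vstack([
    rng.normal(0.0, 0.3, size=(15, 8)),
    rng.normal(3.0, 0.3, size=(15, 8)),
])

# Embed the 8-D latent vectors into 2-D for visual cluster inspection.
# perplexity must be smaller than the number of samples.
embedding = TSNE(n_components=2, perplexity=5, init="pca",
                 random_state=0).fit_transform(latents)
print(embedding.shape)  # (30, 2)
```

The resulting 2-D embedding can then be scatter-plotted, colored by class label where labels are available, to judge how well the latent space separates object categories.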
This thesis will be conducted in the scope of an ongoing research project in collaboration with the partners from VUB/ETRO.
C. R. Qi, H. Su, K. Mo and L. J. Guibas, "PointNet: Deep learning on point sets for 3D classification and segmentation," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2017.
C. R. Qi, L. Yi, H. Su and L. J. Guibas, "PointNet++: Deep hierarchical feature learning on point sets in a metric space," in Advances in Neural Information Processing Systems, pp. 5099–5108, 2017.
Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu and M. Bennamoun, "Deep learning for 3D point clouds: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, pp. 4338–4364, Dec. 2021.
E. Remelli, P. Baque and P. Fua, "NeuralSampler: Euclidean point cloud auto-encoder and sampler," 2019. [Online]. Available: http://arxiv.org/abs/1901.09394
L. Kong, P. Rajak and S. Shakeri, "Generative models for 3D point clouds," preprint. [Online]. Available: https://rajak7.github.io/pdf/CS_236.pdf