1 Introduction

Semantic scene segmentation is important for a large variety of applications as it enables understanding of visual data. In particular, deep learning-based approaches have led to remarkable results in this context, allowing prediction of accurate per-pixel labels in images [14, 22]. Typically, these approaches operate on a single RGB image; however, one can easily formulate the analogous task in 3D on a per-voxel basis [5, 13, 21, 34, 40, 41], which is a common scenario in the context of 3D scene reconstruction. In contrast to 2D, the third dimension offers a unique opportunity: it not only provides semantics, but also a spatial semantic map of the scene content based on the underlying 3D representation. This is particularly relevant for robotics applications, since a robot needs to know not only what is in a scene but also where things are.

In 3D, the representation of a scene is typically obtained from RGB-D surface reconstruction methods [6, 17, 26, 27] which often store scanned geometry in a 3D voxel grid where the surface is encoded by an implicit surface function such as a signed distance field [4]. One approach towards analyzing these reconstructions is to leverage a CNN with 3D convolutions, which has been used for shape classification [30, 43], and recently also for predicting dense semantic 3D voxel maps [5, 8, 36]. In theory, one could simply add an additional color channel to the voxel grid in order to incorporate RGB information; however, the limited voxel resolution prevents encoding feature-rich image data (Fig. 1).

Fig. 1. 3DMV takes as input a reconstruction of an RGB-D scan along with its color images (left), and predicts a 3D semantic segmentation in the form of per-voxel labels (mapped to the mesh, right). The core of our approach is a joint 3D-multi-view prediction network that leverages the synergies between geometric and color features. (Color figure online)

In this work, we specifically address this problem of how to incorporate RGB information for the 3D semantic segmentation task, and leverage the combined geometric and RGB signal in a joint, end-to-end approach. To this end, we propose a novel network architecture that takes as input the 3D scene representation as well as the input of nearby views in order to predict a dense semantic label set on the voxel grid. Instead of mapping color data directly on the voxel grid, the core idea is to first extract 2D feature maps from 2D images using the full-resolution RGB input. These features are then downsampled through convolutions in the 2D domain, and the resulting 2D feature map is subsequently backprojected into 3D space. In 3D, we leverage a 3D convolutional network architecture to learn from both the backprojected 2D features as well as 3D geometric features. This way, we can join the benefits of existing approaches and leverage all available information, significantly improving on existing approaches.

Our main contribution is the formulation of a joint, end-to-end convolutional neural network which learns to infer 3D semantics from both 3D geometry and 2D RGB input. In our evaluation, we provide a comprehensive analysis of the design choices of the joint 2D-3D architecture, and compare it against current state-of-the-art methods. In the end, our approach increases 3D segmentation accuracy from 52.8% to 75.0% compared to the best existing volumetric architecture.

2 Related Work

Deep Learning in 3D. An important avenue for 3D scene understanding has been opened through recent advances in deep learning. Similar to the 2D domain, convolutional neural networks (CNNs) can operate in volumetric domains using an additional spatial dimension for the filter banks. 3D ShapeNets [2] was one of the first works in this context; it learns a 3D convolutional deep belief network from a shape database. Several works have followed, using 3D CNNs for object classification [23, 30] or generative scene completion tasks [7, 8, 10]. In order to address the memory and compute requirements, hierarchical 3D CNNs have been proposed to more efficiently represent and process 3D volumes [10, 12, 32, 33, 38, 42]. The spatial extent of a 3D CNN can also be increased with dilated convolutions [44], which have been used to predict missing voxels and infer semantic labels [36], or by using fully-convolutional networks in order to decouple the dimensions of training and test time [8]. Very recently, we have also seen network architectures that operate on an (unstructured) point-based representation [29, 31].

Multi-view Deep Networks. An alternative way of learning a classifier on 3D input is to render the geometry, run a 2D feature extractor, and combine the extracted features using max pooling. The multi-view CNN approach by Su et al. [37] was one of the first to propose such an architecture for object classification. However, since the output is a classification score, this architecture does not spatially correlate the accumulated 2D features. Very recently, a multi-view network has been proposed for part-based mesh segmentation [18]. Here, 2D confidence maps of each part label are projected on top of ShapeNet [2] models, where a mesh-based CRF accumulates inputs of multiple images to predict the part labels on the mesh geometry. This approach handles only relatively small label sets (e.g., 2–6 part labels), and its input is 2D renderings of the 3D meshes; i.e., the multi-view input is meant as a replacement input for 3D geometry. Although these methods are not designed for 3D semantic segmentation, we consider them as the main inspiration for our multi-view component.

Multi-view networks have also been proposed in the context of 3D reconstruction. For instance, Choy et al. [3] use an RNN to accumulate features from different views, and Tulsiani et al. [39] propose an unsupervised approach that takes multi-view input to learn a latent 3D space for 3D reconstruction. In the context of stereo reconstruction, multi-view networks have been used to project features into 3D and produce consistent reconstructions [19, 20]. An alternative way to combine several input views with 3D is to project colors directly into the voxels, maintaining one channel for each input view per voxel [16]. However, due to memory requirements, this becomes impractical for a large number of input views.

3D Semantic Segmentation. Semantic segmentation on 2D images is a popular task and has been heavily explored using cutting-edge neural network approaches [14, 22]. The analogous task can be formulated in 3D, where the goal is to predict semantic labels on a per-voxel level [40, 41]. Although this is a relatively recent task, it is extremely relevant to a large range of applications, in particular robotics, where a spatial understanding of the inferred semantics is essential. For the 3D semantic segmentation task, several datasets and benchmarks have recently been developed. The ScanNet [5] dataset introduced a 3D semantic segmentation task on approx. 1.5k RGB-D scans and reconstructions obtained with a Structure Sensor. It provides ground truth annotations for training, validation, and testing directly on the 3D reconstructions; it also includes approx. 2.5 million RGB-D frames whose 2D annotations are derived using rendered 3D-to-2D projections. Matterport3D [1] is another recent dataset of about 90 building-scale scenes in the same spirit as ScanNet; it includes fewer RGB-D frames (approx. 194,400) but has more complete reconstructions.

3 Overview

The goal of our method is to predict a 3D semantic segmentation based on the input of commodity RGB-D scans. More specifically, we want to infer semantic class labels on a per-voxel level of the grid of a 3D reconstruction. To this end, we propose a joint 2D-3D neural network that leverages both RGB and geometric information obtained from a 3D scan. For the geometry, we consider a regular volumetric grid whose voxels encode a ternary state (known-occupied, known-free, unknown). To perform semantic segmentation on full 3D scenes of varying sizes, our network operates on a per-chunk basis; i.e., it predicts columns of a scene in sliding-window fashion through the xy-plane at test time. For a given xy-location in a scene, the network takes as input the volumetric grid of the surrounding area (chunks of \(31 \times 31 \times 62\) voxels). The network then extracts geometric features using a series of 3D convolutions, and predicts per-voxel class labels for the center column at the current xy-location. In addition to the geometry, we select nearby RGB views at the current xy-location that overlap with the associated chunk. For all of these 2D views, we run the respective images through a 2D neural network that extracts their corresponding features. Note that these 2D networks all have the same architecture and share the same weights.

In order to combine the 2D and 3D features, we introduce a differentiable backprojection layer that maps 2D features onto the 3D grid. These projected features are then merged with the 3D geometric information through a 3D convolutional part of the network. In addition to the projection, we add a voxel pooling layer that enables handling a variable number of RGB views associated with a 3D chunk; the pooling is performed on a per-voxel basis. In order to run 3D semantic segmentation for entire scans, this network is run for each xy-location of a scene, taking as input the corresponding local chunks.

In the following, we will first introduce the details of our network architecture (see Sect. 4) and then show how we train and implement our method (see Sect. 5).

Fig. 2. Network overview: our architecture is composed of a 2D and a 3D part. The 2D side takes as input several aligned RGB images from which features are learned with a proxy loss. These are mapped to 3D space using a differentiable backprojection layer. Features from multiple views are max-pooled on a per-voxel basis and fed into a stream of 3D convolutions. At the same time, we input the 3D geometry into another 3D convolution stream. Then, both 3D streams are joined and the 3D per-voxel labels are predicted. The whole network is trained in an end-to-end fashion.

4 Network Architecture

Our network is composed of a 3D stream and several 2D streams that are combined in a joint 2D-3D network architecture. The 3D part takes as input a volumetric grid representing the geometry of a 3D scan, and the 2D streams take as input the associated RGB images. To this end, we assume that the 3D scan is composed of a sequence of RGB-D images obtained from a commodity RGB-D camera, such as a Kinect or a Structure Sensor; although note that our method generalizes to other sensor types. We further assume that the RGB-D images are aligned with respect to their world coordinate system using an RGB-D reconstruction framework; in the case of ScanNet [5] scenes, the BundleFusion [6] method is used. Finally, the RGB-D images are fused together in a volumetric grid, which is commonly done by using an implicit signed distance function [4]. An overview of the network architecture is provided in Fig. 2.

4.1 3D Network

Our 3D network part is composed of a series of 3D convolutions operating on a regular volumetric grid. The volumetric grid is a subvolume of the voxelized 3D representation of the scene. Each subvolume is centered around a specific xy-location at a size of \(31\times 31\times 62\) voxels, with a voxel size of 4.8 cm. Hence, we consider a spatial neighborhood of 1.5 m \(\times \) 1.5 m and 3 m in height. Note that we use a height of 3 m in order to cover the height of most indoor environments, such that we only need to train the network to operate in varying xy-space. The 3D network takes these subvolumes as input, and predicts the semantic labels for the center column of the respective subvolume at a resolution of \(1\times 1\times 62\) voxels; i.e., it simultaneously predicts labels for 62 voxels. For each voxel, we encode the corresponding value of the scene reconstruction state: known-occupied (i.e., on the surface), known-free space (i.e., based on empty space carving [4]), or unknown space (i.e., we have no knowledge about the voxel). We represent this through a 2-channel volumetric grid, the first a binary encoding of the occupancy, and the second a binary encoding of the known/unknown space. The 3D network then processes these subvolumes with a series of nine 3D convolutions which expand the feature dimension and reduce the spatial dimensions, along with dropout regularization during training, before a final set of fully connected layers which predict the classification scores for each voxel.
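For illustration, the following minimal sketch (not the authors' code; the integer encoding of the ternary state is an assumption) converts such a state grid into the 2-channel occupancy/known input described above:

```python
import numpy as np

def encode_ternary_grid(state):
    """state: (31, 31, 62) integer array; assumed convention:
    0 = unknown, 1 = known-free, 2 = known-occupied."""
    occupancy = (state == 2).astype(np.float32)   # channel 1: binary occupancy
    known = (state != 0).astype(np.float32)       # channel 2: known vs. unknown
    return np.stack([occupancy, known], axis=0)   # shape (2, 31, 31, 62)
```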

In the following, we show how to incorporate learned 2D features from associated 2D RGB views.

4.2 2D Network

The aim of the 2D part of the network is to extract features from each of the input RGB images. To this end, we use a 2D network architecture based on ENet [28] to learn those features. Note that although we can use a variable number of 2D input views, all 2D networks share the same weights as they are jointly trained. Our choice to use ENet is due to its simplicity: it is both fast to run and memory-efficient to train. In particular, the low memory requirements are critical since they allow us to jointly train our 2D-3D network in an end-to-end fashion with multiple input images per train sample. Although our aim is 2D-3D end-to-end training, we additionally use a 2D proxy loss for each image that makes the training more stable; i.e., each 2D stream is asked to predict meaningful semantic features for an RGB image segmentation task. Here, we use semantic labels of the 2D images as ground truth; in the case of ScanNet [5], these are derived from the original 3D annotations by rendering the annotated 3D mesh from the camera poses of the respective RGB images. The final goal of the 2D network is to obtain the feature maps of the last layer before the per-pixel classification scores of the proxy loss; these feature maps are then backprojected into 3D to join with the 3D network, using a differentiable backprojection layer. In particular, from an input RGB image of size \(328\times 256\), we obtain a 2D feature map of size \((128\times ) 41\times 32\), which is then backprojected into the space of the corresponding 3D volume, obtaining a 3D representation of the feature map of size \((128\times ) 31\times 31\times 62\).
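To make the role of the 2D stream concrete, here is a small sketch under simplifying assumptions: a generic strided encoder stands in for the actual ENet backbone, reducing a \(328\times 256\) RGB image to a 128-channel \(41\times 32\) feature map, with a 1x1 classifier on top providing the 2D proxy loss (all layer choices besides the input/output resolutions are assumptions).

```python
import torch
import torch.nn as nn

class FeatureExtractor2D(nn.Module):
    """Stand-in for the ENet-based 2D stream: 328x256 RGB -> (128, 32, 41)."""
    def __init__(self, num_classes=21, feat_dim=128):  # 21 proxy classes is an assumption
        super().__init__()
        self.encoder = nn.Sequential(                   # overall stride of 8
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.proxy_classifier = nn.Conv2d(feat_dim, num_classes, 1)

    def forward(self, rgb):                        # rgb: (B, 3, 256, 328)
        feat = self.encoder(rgb)                   # (B, 128, 32, 41), backprojected to 3D
        proxy_logits = self.proxy_classifier(feat) # supervised by the 2D proxy loss
        return feat, proxy_logits
```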

4.3 Backprojection Layer

In order to connect the learned 2D features from each of the input RGB views with the 3D network, we use a differentiable backprojection layer. Since we assume known 6-DoF pose alignments for the input RGB images with respect to each other and the 3D reconstruction, we can compute 2D-3D associations on-the-fly. The layer is essentially a loop over every voxel in the 3D subvolume that a given image is associated with. For every voxel, we compute the 3D-to-2D projection based on the corresponding camera pose, the camera intrinsics, and the world-to-grid transformation matrix. We use the depth data from the RGB-D images in order to prune projected voxels beyond a threshold of the voxel size of 4.8 cm; i.e., we compute associations only for voxels close to the geometry of the depth map. We compute the correspondences from 3D voxels to 2D pixels since this allows us to obtain a unique voxel-to-pixel mapping. Although one could pre-compute these voxel-to-pixel associations, we simply compute this mapping on-the-fly in the layer as these computations are already highly memory bound on the GPU; in addition, it saves significant disk storage since it would otherwise involve a large amount of index data for full scenes.
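A simplified sketch of this per-voxel projection loop is given below. The matrix conventions and argument shapes are assumptions, and for clarity the mapping is computed at depth-map resolution rather than at the \(41\times 32\) feature-map resolution used in the network.

```python
import numpy as np

def compute_voxel_to_pixel(grid_dims, grid_to_world, world_to_camera,
                           intrinsics, depth, voxel_size=0.048):
    """grid_dims: (31, 31, 62); grid_to_world, world_to_camera: 4x4 matrices;
    intrinsics: 3x3; depth: (H, W) metric depth map (all conventions assumed)."""
    h, w = depth.shape
    mapping = -np.ones(grid_dims, dtype=np.int64)           # -1: no association
    for idx in np.ndindex(*grid_dims):                      # loop over the subvolume
        center = np.append(np.array(idx, dtype=np.float64) + 0.5, 1.0)
        p_cam = world_to_camera @ (grid_to_world @ center)  # voxel center in camera space
        if p_cam[2] <= 0:                                   # behind the camera
            continue
        u = int(round(intrinsics[0, 0] * p_cam[0] / p_cam[2] + intrinsics[0, 2]))
        v = int(round(intrinsics[1, 1] * p_cam[1] / p_cam[2] + intrinsics[1, 2]))
        if 0 <= u < w and 0 <= v < h and depth[v, u] > 0:
            if abs(depth[v, u] - p_cam[2]) <= voxel_size:   # prune far-from-surface voxels
                mapping[idx] = v * w + u                    # flat pixel index
    return mapping  # reused (inverted) for the backward pass
```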

Once we have computed voxel-to-pixel correspondences, we can project the features of the last layer of the 2D network to the voxel grid:

$$n_{feat}\times w_{2d} \times h_{2d} \rightarrow n_{feat}\times w_{3d} \times h_{3d} \times d_{3d}$$

For the backward pass, we use the inverse mapping of the forward pass, which we store in a temporary index map. We use 2D feature maps (feature dim. of 128) of size \((128 \times ) 41 \times 32\) and project them to a grid of size \((128\times ) 31\times 31\times 62\).

In order to handle multiple 2D input streams, we compute voxel-to-pixel associations with respect to each input view. As a result, some voxels will be associated with multiple pixels from different views. In order to combine projected features from multiple input views, we use a voxel max-pooling operation that computes the maximum response on a per-feature-channel basis. Since the max-pooling operation is invariant to the number of inputs, it enables selecting the features of interest from an arbitrary number of input images.
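A minimal sketch of this per-voxel pooling (the tensor layout is an assumption):

```python
import torch

def pool_views(projected_feats):
    """projected_feats: (num_views, feat_dim, 31, 31, 62) tensor of backprojected
    2D features; voxels not covered by a view are assumed to be pre-filled with a
    very low value so that they never win the maximum."""
    pooled, _ = projected_feats.max(dim=0)   # per-voxel, per-channel maximum
    return pooled                            # (feat_dim, 31, 31, 62)
```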

4.4 Joint 2D-3D Network

The joint 2D-3D network combines 2D RGB features and 3D geometric features using the mapping from the backprojection layer. These two inputs are processed with a series of 3D convolutions and then concatenated together; the joined feature is then further processed with a set of 3D convolutions. We have experimented with several options as to where to join these two parts: at the beginning (i.e., directly concatenated together without independent 3D processing), approximately 1/3 or 2/3 through the 3D network, and at the end (i.e., directly before the classifier). We use the variant that provided the best results, fusing the 2D and 3D features together at 2/3 of the architecture (i.e., after the 6th of the nine 3D convolutions); see Table 5 for the corresponding ablation study, and the sketch of the fusion point below. Note that the entire network, as shown in Fig. 2, is trained in an end-to-end fashion, which is feasible since all components are differentiable. Table 1 shows an overview of the distribution of learnable parameters of our 3DMV model.
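A minimal sketch of the fusion point (channel counts and layer counts are assumptions; only the channel-wise concatenation and the subsequent joint 3D convolutions follow the description above):

```python
import torch
import torch.nn as nn

class JointFusion(nn.Module):
    """Concatenate the geometry stream and the pooled RGB-feature stream along the
    channel dimension and refine them jointly with further 3D convolutions."""
    def __init__(self, c_geo=64, c_rgb=64, c_out=128):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv3d(c_geo + c_rgb, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, geo_feat, rgb_feat):   # each (B, C, D, H, W)
        return self.refine(torch.cat([geo_feat, rgb_feat], dim=1))
```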

Table 1. Distribution of learnable parameters of our 3DMV model. Note that the majority of the network weights are part of the combined 3D stream just before the per-voxel predictions where we rely on strong feature maps; see top left of Fig. 2.

4.5 Evaluation in Sliding Window Mode

Our joint 2D-3D network operates on a per-chunk basis; i.e., it takes fixed subvolumes of a 3D scene as input (along with associated RGB views), and predicts labels for the voxels in the center column of the given chunk. In order to perform a semantic segmentation of large 3D environments, we slide the subvolume through the 3D grid of the underlying reconstruction. Since the height of the subvolume (3 m) is sufficient for most indoor environments, we only need to slide over the xy-domain of the scene. Note, however, that for training, the training samples do not need to be spatially connected, which allows us to train on a random set of subvolumes. This de-coupling of training and test extents is particularly important since it allows us to provide a good label and data distribution of training samples (e.g., chunks with sufficient coverage and variety).
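A sketch of this sliding-window evaluation (function names and the unit stride are assumptions): each step feeds a \(31\times 31\times 62\) chunk, together with its associated views, to the network and writes back the predicted labels of the center column.

```python
import numpy as np

def segment_scene(scene_grid, predict_column, window=(31, 31, 62)):
    """scene_grid: (X, Y, 62) ternary state grid of the scene;
    predict_column(chunk, x, y) returns the 62 labels of the center column
    of the chunk at xy-location (x, y)."""
    X, Y, Z = scene_grid.shape
    rx, ry = window[0] // 2, window[1] // 2
    labels = np.zeros((X, Y, Z), dtype=np.int64)
    padded = np.pad(scene_grid, ((rx, rx), (ry, ry), (0, 0)))  # pad xy-borders
    for x in range(X):                       # slide over the xy-domain
        for y in range(Y):
            chunk = padded[x:x + window[0], y:y + window[1], :]
            labels[x, y, :] = predict_column(chunk, x, y)
    return labels
```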

5 Training

5.1 Training Data

We train our joint 2D-3D network architecture in an end-to-end fashion. To this end, we prepare correlated 3D and RGB input to the network for the training process. The 3D geometry is encoded in a ternary occupancy grid that encodes known-occupied, known-free, and unknown states for each voxel. The ternary information is split across two channels, where the first channel encodes occupancy and the second channel encodes the known vs. unknown state. To select train subvolumes from a 3D scene, we randomly sample subvolumes as potential training samples. For each potential train sample, we check its label distribution and discard samples containing only structural elements (i.e., wall/floor) with \(95\%\) probability. In addition, all samples with empty center columns are discarded, as are samples with less than 70% of the center column geometry annotated; see the sketch below.
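A sketch of this sampling filter (thresholds from the text; the helper names and label encoding are hypothetical):

```python
import random

STRUCTURAL = {"wall", "floor"}

def keep_sample(center_labels, label_names, annotated_fraction):
    """center_labels: class ids of the annotated center-column voxels (negative
    ids are treated as unannotated); annotated_fraction: fraction of the center
    column geometry that carries an annotation."""
    if annotated_fraction < 0.7:
        return False                         # too little annotated geometry
    present = {label_names[l] for l in center_labels if l >= 0}
    if not present:
        return False                         # empty center column
    if present <= STRUCTURAL:
        return random.random() < 0.05        # keep wall/floor-only samples with 5% prob.
    return True
```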

For each subvolume, we then associate k nearby RGB images whose alignment is known from the 6-DoF camera pose information. We select images greedily based on maximum coverage; i.e., we first pick the image covering the most voxels in the subvolume, and subsequently take the next image that covers the largest number of voxels not yet covered by the current set (see the sketch below). We typically select 3–5 images since additional gains in coverage become smaller with each added image. For each sampled subvolume, we augment it with 8 random rotations for a total of 1,316,080 train samples. Since existing 3D datasets, such as ScanNet [5] or Matterport3D [1], contain unannotated regions in the ground truth (see Fig. 3, right), we mask out these regions in both our 3D loss and 2D proxy loss. Note that this strategy still allows for making predictions for all voxels at test time.
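A sketch of this greedy maximum-coverage view selection (helper names are hypothetical):

```python
def select_views(candidate_views, voxels_covered_by, k=5):
    """candidate_views: list of view ids; voxels_covered_by[v]: set of subvolume
    voxel indices visible in view v (e.g., obtained from the backprojection)."""
    selected, covered = [], set()
    remaining = list(candidate_views)
    for _ in range(k):
        best = max(remaining, key=lambda v: len(voxels_covered_by[v] - covered),
                   default=None)
        if best is None or not (voxels_covered_by[best] - covered):
            break                            # no view adds further coverage
        selected.append(best)
        covered |= voxels_covered_by[best]
        remaining.remove(best)
    return selected
```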

5.2 Implementation

We implement our approach in PyTorch. While 2D and 3D convolutional layers are already provided by the PyTorch API, we implement a custom layer for the backprojection. We implement this backprojection in Python as a custom PyTorch layer, representing the projection as a series of matrix multiplications in order to exploit PyTorch parallelization, and run the backprojection on the GPU through the PyTorch API. For training, we also tried optimizing only parts of the network; however, we found that the end-to-end version that jointly optimizes both 2D and 3D performed best. In the training process, we use an SGD optimizer with a learning rate of 0.001 and a momentum of 0.9; we set the batch size to 8. Note that our training set is quite biased towards structural classes (e.g., wall, floor), even when discarding most structural-only samples, as these elements are vastly dominant in indoor scenes. In order to account for this data imbalance, we use the histogram of classes represented in the train set to weight the loss during training. We train our network for 200,000 iterations; for our network trained on 3 views, this takes \({\approx }24\) h, and for 5 views, \({\approx }48\) h.
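A sketch of this class re-weighting (the inverse-frequency scheme is an assumption; the text only states that the train-set class histogram is used to weight the loss):

```python
import torch
import torch.nn as nn

def make_weighted_criterion(class_counts, ignore_index=-100):
    """class_counts: per-class voxel counts from the train-set histogram;
    unannotated voxels are mapped to ignore_index and excluded from the loss."""
    counts = torch.tensor(class_counts, dtype=torch.float32)
    weights = counts.sum() / (counts.clamp(min=1.0) * len(counts))  # inverse frequency
    return nn.CrossEntropyLoss(weight=weights, ignore_index=ignore_index)

# usage sketch (optimizer hyperparameters from the text)
# criterion = make_weighted_criterion(train_histogram)
# optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
```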

6 Results

In this section, we provide an evaluation of our proposed method with a comparison to existing approaches. We evaluate on the ScanNet dataset [5], which contains 1513 RGB-D scans composed of 2.5M RGB-D images. We use the public train/val/test split of 1045, 156, 312 scenes, respectively, and follow the 20-class semantic segmentation task defined in the original ScanNet benchmark. We evaluate our results with per-voxel class accuracies, following the evaluations of previous work [5, 8, 31]. Additionally, we visualize our results qualitatively and in comparison to previous work in Fig. 3, with close-ups shown in Fig. 4. Note that we map the predictions from all methods back onto the mesh reconstruction for ease of visualization.

Comparison to State of the Art. Our main results are shown in Table 2, where we compare to several state-of-the-art volumetric (ScanNet [5], ScanComplete [8]) and point-based approaches (PointNet++ [31]) on the ScanNet test set. Additionally, we show an ablation study regarding our design choices in Table 3.

The best variant of our 3DMV network achieves 75% average classification accuracy, which is quite significant considering the difficulty of the task and the performance of existing approaches. That is, we improve by 22.2% over existing volumetric approaches and by 14.8% over the state-of-the-art PointNet++ architecture.

How Much Does RGB Input Help? Table 3 includes a direct comparison between our 3D network architecture when using RGB features against the exact same 3D network without the RGB input. Performance improves from 54.4% to 70.1% with RGB input, even with just a single RGB view. In addition, we tried out the naive alternative of using per-voxel colors rather than a 2D feature extractor. Here, we see only a marginal difference compared to the purely geometric baseline (54.4% vs. 55.9%). We attribute this relatively small gain to the limited grid resolution (\({\approx }5\) cm voxels), which is insufficient to capture rich RGB features. Overall, we can clearly see the benefits of RGB input, as well as the design choice to first extract features in the 2D domain.

How Much Does Geometric Input Help? Another important question is whether we actually need the 3D geometric input, or whether geometric information is a redundant subset of the RGB input; see Table 3. The first experiment we conduct in this context is simply a projection of the predicted 2D labels on top of the geometry. If we only use the labels from a single RGB view, we obtain 27% average accuracy (vs. 70.1% with 1 view + geometry); for 3 views, this label backprojection achieves 44.2% (vs. 73.0% with 3 views + geometry). Note that this is related to the limited coverage of the RGB backprojections (see Table 4).

However, the interesting experiment now is what happens if we still run a series of 3D convolutions after the backprojection of the 2D labels. Again, we omit inputting the scene geometry, but we now learn how to combine and propagate the backprojected features in the 3D grid; essentially, we ignore the first part of our 3D network; cf. Fig. 2. For 3 RGB views, this results in an accuracy of 58.2%; this is higher than the 54.4% of geometry only, but it is much lower than our final 3-view result of 73.0% from the joint network. Overall, this shows that RGB and geometric information aptly complement each other, and that the synergies allow for an improvement over the individual inputs by 14.8% and 18.6%, respectively (for 3 views).

Table 2. Comparison of our final trained model (5 views, end-to-end) against other state-of-the-art methods on the ScanNet dataset [5]. We can see that our approach makes significant improvements, 22.2% over existing volumetric and approx. 14.8% over state-of-the-art PointNet++ architectures.
Fig. 3. Qualitative semantic segmentation results on the ScanNet [5] test set. We compare with the 3D-based approaches of ScanNet [5], ScanComplete [8], and PointNet++ [31]. Note that the ground truth scenes contain some unannotated regions, denoted in black. Our joint 3D-multi-view approach achieves more accurate semantic predictions.

How to Feed 2D Features into the 3D Network? An interesting question is where to join 2D and 3D features; i.e., at which layer of the 3D network do we fuse together the features originating from the RGB images with the features from the 3D geometry. On the one hand, one could argue that it makes more sense to feed the 2D part early into the 3D network in order to have more capacity for learning the joint 2D-3D combination. On the other hand, it might make more sense to keep the two streams separate for as long as possible to first extract strong independent features before combining them.

To this end, we conduct an experiment with different 2D-3D network combinations (for simplicity, always using a single RGB view without end-to-end training); see Table 5. We tried four combinations, fusing the 2D features into the 3D network at the beginning, after the first third of the network, after the second third, and at the very end. Interestingly, the results are relatively similar (67.6%, 65.4%, 69.1%, and 67.5%, respectively), suggesting that the 3D network can adapt quite well to the 2D features. Across these experiments, the second-third option turned out to be a few percentage points higher than the alternatives; hence, we use it as the default in all other experiments.

How Much Do Additional Views Help? In Table 3, we also examine the effect of each additional view on classification performance. For geometry only, we obtain an average classification accuracy of 54.4%; adding a single view per chunk increases accuracy to 70.1% (+15.7%); for 3 views, it increases to 73.1% (+3.0%); for 5 views, it reaches 75.0% (+1.9%). Hence, the incremental gains become smaller with every additional view; this is somewhat expected as a large part of the benefit is attributed to additional coverage of the 3D volume with 2D features. Once we already use a substantial number of views, each additional view shares redundancy with the previous ones, as shown in Table 4.

Is End-to-End Training of the Joint 2D-3D Network Useful? Here, we examine the benefits of training the 2D-3D network in an end-to-end fashion, rather than simply using a pre-trained 2D network. We conduct this experiment with 1, 3, and 5 views. The end-to-end variant consistently outperforms the fixed version, improving the respective accuracies by 1.0%, 0.2%, and 0.5%. Although the end-to-end variants are strictly better, the increments are smaller than we initially hoped for. We also tried removing the 2D proxy loss that enforces good 2D predictions, which led to a slightly lower performance. Overall, end-to-end training with a proxy loss always performed best and we use it as our default.

Table 3. Ablation study for different design choices of our approach on ScanNet [5]. We first test simple baselines where we backproject 2D labels from 1 and 3 views (rows 1–2), then run a set of 3D convolutions after the backprojection (row 3). We then test a 3D-geometry-only network (row 4). Augmenting the 3D-only version with per-voxel colors shows only small gains (row 5). In rows 6–11, we test our joint 2D-3D architecture with a varying number of views, and the effect of end-to-end training. Our 5-view, end-to-end variant performs best.

Evaluation in 2D Domains Using NYUv2. Although we are predicting 3D per-voxel labels, we can also project the obtained voxel labels into the 2D images. In Table 6, we show such an evaluation on the NYUv2 [35] dataset. For this task, we train our network on ScanNet data as well as on the NYUv2 train annotations projected into 3D. Although this is not the actual task of our method, it can be seen as an efficient way to accumulate semantic information from multiple RGB-D frames by using the 3D geometry as a proxy for the learning framework. Overall, our joint 2D-3D architecture compares favorably against the respective baselines on this 13-class task.
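A sketch of this label transfer (camera and grid conventions are assumptions): each depth pixel is unprojected into the voxel grid and takes the label of the voxel it falls into; pixels without valid depth remain unlabeled.

```python
import numpy as np

def project_labels_to_image(voxel_labels, depth, intrinsics, camera_to_world,
                            world_to_grid, unlabeled=-1):
    """voxel_labels: (X, Y, Z) predicted per-voxel labels; depth: (H, W) metric
    depth map; intrinsics: 3x3; camera_to_world, world_to_grid: 4x4 matrices."""
    h, w = depth.shape
    out = np.full((h, w), unlabeled, dtype=np.int64)
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]
    for v in range(h):
        for u in range(w):
            d = depth[v, u]
            if d <= 0:
                continue                     # no depth measurement at this pixel
            p_cam = np.array([(u - cx) * d / fx, (v - cy) * d / fy, d, 1.0])
            p_grid = world_to_grid @ (camera_to_world @ p_cam)
            i, j, k = np.floor(p_grid[:3]).astype(int)
            if (0 <= i < voxel_labels.shape[0] and 0 <= j < voxel_labels.shape[1]
                    and 0 <= k < voxel_labels.shape[2]):
                out[v, u] = voxel_labels[i, j, k]
    return out
```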

Table 4. Amount of coverage from varying number of views over the annotated ground truth voxels of the ScanNet [5] test scenes.
Table 5. Evaluation of various network combinations for joining the 2D and 3D streams in the 3D architecture (cf. Fig. 2, top). We use the single view variant with a fixed 2D network here for simplicity. Interestingly, performance only changes slightly; however, the 2/3 version performed the best, which is our default for all other experiments.
Fig. 4. Additional qualitative semantic segmentation results (close-ups) on the ScanNet [5] test set. Note the consistency of our predictions compared to the other baselines.

Table 6. We can also evaluate our method on 2D semantic segmentation tasks by projecting the predicted 3D labels into the respective RGB-D frames. Here, we show a comparison on dense pixel classification accuracy on NYUv2 [25]. Note that the reported ScanNet classification is on the 11-class task.

Summary Evaluation.

  • RGB and geometric features are orthogonal and help each other.

  • More views help, but increments get smaller with every view.

  • End-to-end training is strictly better, but the improvement is not that big.

  • Variations of where to join the 2D and 3D features change performance to some degree; 2/3 performed best in our tests.

  • Our results are significantly better than the best volumetric or PointNet baseline (+22.2% and +14.8%, respectively).

Limitations. While our joint 3D-multi-view approach achieves significant performance gains over the previous state of the art in 3D semantic segmentation, there are still several important limitations. Our approach operates on dense volumetric grids, which quickly become impractical for high resolutions; e.g., RGB-D scanning approaches typically produce reconstructions with sub-centimeter voxel resolution; sparse approaches, such as OctNet [33], might be a good remedy. Additionally, we predict the voxels of each column of a scene jointly, but each column is predicted independently of the others, which can give rise to some label inconsistencies in the final predictions since different RGB views might be selected; note, however, that due to the convolutional nature of the 3D networks, the geometry remains spatially coherent.

7 Conclusion and Future Work

We presented 3DMV, a joint 3D-multi-view approach built on the core idea of combining geometric and RGB features in a joint network architecture. We show that our joint approach can achieve significantly better accuracy for semantic 3D scene segmentation. In a series of evaluations, we carefully examine our design choices; for instance, we demonstrate that the 2D and 3D features complement each other rather than being redundant; we also show that our method can successfully take advantage of using several input views from an RGB-D sequence to gain higher coverage, thus resulting in better performance. In the end, we are able to show results at more than 14% higher classification accuracy than the best existing 3D segmentation approach. Overall, we believe that these improvements will open up new possibilities where not only the semantic content, but also the spatial 3D layout plays an important role.

For the future, we still see many open questions in this area. First, the 3D semantic segmentation problem is far from solved, and semantic instance segmentation in 3D is still in its infancy. Second, there are many fundamental questions about the scene representation for realizing 3D convolutional neural networks, and how to handle mixed sparse-dense data representations. And third, we also see tremendous potential for combining multi-modal features for generative tasks in 3D reconstruction, such as scan completion and texturing.