
1 Introduction

Convolutional Neural Networks (CNNs) excel at a wide array of image recognition tasks [1,2,3]. However, their ability to learn effective representations of images requires large amounts of labelled training data [4, 5]. Annotating training data is a particular bottleneck in the case of segmentation, where labelling each pixel in the image by hand is especially time-consuming. This is illustrated by the Cityscapes dataset, where finely annotating a single image took “more than 1.5 h on average” [6]. In this paper, we address the problems of semantic- and instance-segmentation using only weak annotations in the form of bounding boxes and image-level tags. Bounding boxes take only 7 s to draw using the labelling method of [7], and image-level tags an average of 1 s per class [8]. Using only these weak annotations would correspond to a roughly 30-fold reduction in the labelling time for a Cityscapes image, which emphasises the importance of cost-effective, weak annotation strategies.

Our work differs from prior art on weakly-supervised segmentation [9,10,11,12,13] in two primary ways: Firstly, our model jointly produces semantic- and instance-segmentations of the image, whereas the aforementioned works only output instance-agnostic semantic segmentations. Secondly, we consider the segmentation of both “thing” and “stuff” classes [14, 15], in contrast to most existing work in both semantic- and instance-segmentation which only consider “things”.

We define the problem of instance segmentation as labelling every pixel in an image with both its object class and an instance identifier [16, 17]. It is thus an extension of semantic segmentation, which only assigns each pixel an object class label. “Thing” classes (such as “person” and “car”) are countable and are also studied extensively in object detection [18, 19]. This is because their finite extent makes it possible to annotate tight, well-defined bounding boxes around them. “Stuff” classes (such as “sky” and “vegetation”), on the other hand, are amorphous regions of homogeneous or repetitive textures [14]. As these classes have ambiguous boundaries and no well-defined shape they are not appropriate to annotate with bounding boxes [20]. Since “stuff” classes are not countable, we assume that all pixels of a stuff category belong to the same, single instance. Recently, this task of jointly segmenting “things” and “stuff” at an instance-level has also been named “Panoptic Segmentation” by [21].

Fig. 1. We propose a method to train an instance segmentation network from weak annotations in the form of bounding boxes and image-level tags. Our network can explain both “thing” and “stuff” classes in the image, and does not produce overlapping instances as common detector-based approaches [22,23,24] do.

Note that many popular instance segmentation algorithms which are based on object detection architectures [22,23,24,25,26] are not suitable for this task, as also noted by [21]. These methods output a ranked list of proposed instances, where the different proposals are allowed to overlap each other as each proposal is processed independently of the others. Consequently, these architectures are not suitable when each pixel in the image has to be explained and assigned a unique label of either a “thing” or “stuff” class, as shown in Fig. 1. This is in contrast to other instance segmentation methods such as [16, 27,28,29,30].

In this work, we use weak bounding box annotations for “thing” classes, and image-level tags for “stuff” classes. Whilst there are many previous works on semantic segmentation from image-level labels, the best performing ones [10, 31,32,33] used a saliency prior. The salient parts of an image are “thing” classes in popular saliency datasets [34,35,36] and this prior therefore does not help at all in segmenting “stuff” as in our case. We also consider the “semi-supervised” case where we have a mixture of weak- and fully-labelled annotations.

To our knowledge, this is the first work which performs weakly-supervised, non-overlapping instance segmentation, allowing our model to explain all “thing” and “stuff” pixels in the image (Fig. 1). Furthermore, our model jointly produces semantic- and instance-segmentations of the image, which to our knowledge is the first time such a model has been trained in a weakly-supervised manner. Moreover, to our knowledge, this is the first work to perform either weakly supervised semantic- or instance-segmentation on the Cityscapes dataset. On Pascal VOC, our method achieves about 95% of fully-supervised accuracy on both semantic- and instance-segmentation. Furthermore, we surpass the state-of-the-art on fully-supervised instance segmentation as well. Finally, we use our weakly- and semi-supervised framework to examine how model performance varies with the number of examples in the training set and the annotation quality of each example, with the aim of helping dataset creators better understand the trade-offs they face in this context.

2 Related Work

Instance segmentation is a popular area of scene understanding research. Most top-performing algorithms modify object detection networks to output a ranked list of segments instead of boxes [22,23,24,25,26, 37]. However, all of these methods process each instance independently and thus overlapping instances are produced – one pixel can be assigned to multiple instances simultaneously. Additionally, object detection based architectures are not suitable for labelling “stuff” classes which cannot be described well by bounding boxes [20]. These limitations, common to all of these methods, have also recently been raised by Kirillov et al. [21]. We observe, however, that there are other instance segmentation approaches based on initial semantic segmentation networks [16, 27,28,29] which do not produce overlapping instances and can naturally handle “stuff” classes. Our proposed approach extends methods of this type to work with weaker supervision.

Although prior work on weakly-supervised instance segmentation is limited, there are many previous papers on weak semantic segmentation, which is also relevant to our task. Early work in weakly-supervised semantic segmentation considered cases where images were only partially labelled using methods based on Conditional Random Fields (CRFs) [38, 39]. Subsequently, many approaches have achieved high accuracy using only image-level labels [9, 10, 40, 41], bounding boxes [11, 12, 42], scribbles [20] and points [13]. A popular paradigm for these works is “self-training” [43]: a model is trained in a fully-supervised manner by generating the necessary ground truth with the model itself in an iterative, Expectation-Maximisation (EM)-like procedure [11, 12, 20, 41]. Such approaches are sensitive to the initial, approximate ground truth which is used to bootstrap training of the model. To this end, Khoreva et al. [42] showed how, given bounding box annotations, carefully chosen unsupervised foreground-background and segmentation-proposal algorithms could be used to generate high-quality approximate ground truth such that iterative updates to it were not required thereafter.

Our work builds on the “self-training” approach to perform instance segmentation. To our knowledge, only Khoreva et al. [42] have published results on weakly-supervised instance segmentation. However, the model used by [42] was not competitive with the existing instance segmentation literature in a fully-supervised setting. Moreover, [42] only considered bounding-box supervision, whilst we consider image-level labels as well. Recent work by [44] modifies Mask-RCNN [22] to train it using fully-labelled examples of some classes, and only bounding box annotations of others. Our proposed method can also be used in a semi-supervised scenario (with a mixture of fully- and weakly-labelled training examples), but unlike [44], our approach works with only weak supervision as well. Furthermore, in contrast to [42, 44], our method does not produce overlapping instances, handles “stuff” classes and can thus explain every pixel in an image as shown in Fig. 1.

3 Proposed Approach

We first describe, in Sects. 3.1 through 3.4, how we generate the approximate ground truth data used to train our semantic- and instance-segmentation models. Thereafter, in Sect. 3.5, we discuss the network architecture that we use.

3.1 Training with Weaker Supervision

In a fully-supervised setting, semantic segmentation models are typically trained by performing multinomial logistic regression independently for each pixel in the image. The loss function, the cross entropy between the ground-truth distribution and the prediction, can be written as

$$\begin{aligned} L = -\sum _{i \in \varOmega }{\log {p(l_i | \mathbf {I})}} \end{aligned}$$
(1)

where \(l_i\) is the ground-truth label at pixel i, \(p(l_i | \mathbf {I})\) is the probability (obtained from a softmax activation) predicted by the neural network for the correct label at pixel i of image \(\mathbf {I}\) and \(\varOmega \) is the set of pixels in the image.

In the weakly-supervised scenarios considered in this paper, we do not have reliable annotations for all pixels in \(\varOmega \). Following recent work [9, 13, 41, 42], we use our weak supervision and image priors to approximate the ground-truth for a subset \(\varOmega ' \subset \varOmega \) of the pixels in the image. We then train our network using the estimated labels of this smaller subset of pixels. Section 3.2 describes how we estimate \(\varOmega '\) and the corresponding labels for images with only bounding-box annotations, and Sect. 3.3 for image-level tags.

Our approach to approximating the ground truth is based on the principle of only assigning labels to pixels which we are confident about, and marking the remaining set of pixels, \(\varOmega \setminus \varOmega '\), as “ignore” regions over which the loss is not computed. This is motivated by Bansal et al. [45] who observed that sampling only 4% of the pixels in the image for computing the loss during fully-supervised training yielded about the same results as sampling all pixels, as traditionally done. This supported their hypothesis that most of the training data for a pixel-level task is statistically correlated within an image, and that randomly sampling a much smaller set of pixels is sufficient. Moreover, [46, 47] showed improved results by respectively sampling only 6% and 12% of the hardest pixels, instead of all of them, in fully-supervised training.
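As a concrete illustration of training on only the labelled subset \(\varOmega '\), the following is a minimal PyTorch-style sketch of the loss in Eq. (1) restricted to confidently-labelled pixels, with an “ignore” index marking \(\varOmega \setminus \varOmega '\). The choice of 255 as the ignore value is an assumption for illustration, not something specified in the text.

```python
import torch
import torch.nn.functional as F

IGNORE_LABEL = 255  # assumed value marking pixels outside the labelled subset

def masked_cross_entropy(logits, approx_gt):
    """Cross-entropy computed only over the approximately labelled pixels.

    logits:    (B, C, H, W) raw class scores from the segmentation network.
    approx_gt: (B, H, W) approximate ground-truth labels, with IGNORE_LABEL
               marking pixels in Omega \ Omega' that contribute no loss.
    """
    return F.cross_entropy(logits, approx_gt, ignore_index=IGNORE_LABEL)

# Toy example: a 2-class prediction where only a small region is labelled.
logits = torch.randn(1, 2, 4, 4, requires_grad=True)
gt = torch.full((1, 4, 4), IGNORE_LABEL, dtype=torch.long)
gt[0, :2, :2] = 1  # a small, confidently-labelled region
loss = masked_cross_entropy(logits, gt)
loss.backward()
```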

3.2 Approximate Ground Truth from Bounding Box Annotations

We use GrabCut [48] (a classic foreground segmentation technique given a bounding-box prior) and MCG [49] (a segment-proposal algorithm) to obtain a foreground mask from a bounding-box annotation, following [42]. To achieve high precision in this approximate labelling, a pixel is only assigned to the object class represented by the bounding box if both GrabCut and MCG agree (Fig. 2).

Fig. 2. An example of generating approximate ground truth from bounding box annotations for an image (a). A pixel is labelled with the bounding-box label if it belongs to the foreground masks of both GrabCut [48] and MCG [49] (b). Approximate instance segmentation ground truth is generated using the fact that each bounding box corresponds to an instance (c). Grey regions are “ignore” labels over which the loss is not computed due to ambiguities in label assignment.

Note that the final stage of MCG uses a random forest trained with pixel-level supervision on Pascal VOC to rank all the proposed segments. We do not perform this ranking step, and obtain a foreground mask from MCG by selecting the proposal that has the highest Intersection over Union (IoU) with the bounding box annotation.

This approach is used to obtain labels for both semantic- and instance-segmentation as shown in Fig. 2. As each bounding box corresponds to an instance, the foreground for each box is the annotation for that instance. If the foreground of two bounding boxes of the same class overlap, the region is marked as “ignore” as we do not have enough information to attribute it to either instance.
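The labelling rules of this subsection can be summarised in the following sketch, which assumes the per-box GrabCut and MCG foreground masks have already been computed (the calls to GrabCut and MCG themselves are not shown), and uses 255 as an assumed “ignore” value.

```python
import numpy as np

IGNORE = 255  # assumed value of the "ignore" label

def approx_gt_from_boxes(h, w, box_classes, grabcut_masks, mcg_masks):
    """Approximate semantic and instance ground truth from bounding boxes.

    box_classes:   list of class ids, one per annotated bounding box.
    grabcut_masks: per-box (h, w) boolean foreground masks from GrabCut.
    mcg_masks:     per-box (h, w) boolean masks of the best-IoU MCG proposal.
    """
    semantic = np.full((h, w), IGNORE, dtype=np.uint8)
    instance = np.full((h, w), IGNORE, dtype=np.uint8)
    claimed = np.zeros((h, w), dtype=np.int32)  # number of boxes claiming each pixel

    for inst_id, (cls, gc, mcg) in enumerate(zip(box_classes, grabcut_masks, mcg_masks)):
        fg = gc & mcg  # a pixel is labelled only if GrabCut and MCG agree
        # Semantic conflict: a different class has already claimed this pixel.
        conflict = fg & (claimed > 0) & (semantic != cls)
        semantic[fg] = cls
        semantic[conflict] = IGNORE
        instance[fg] = inst_id
        claimed[fg] += 1

    # Foregrounds claimed by more than one box are ambiguous at instance level.
    instance[claimed > 1] = IGNORE
    return semantic, instance
```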

3.3 Approximate Ground-Truth from Image-Level Annotations

When only image-level tags are available, we leverage the fact that CNNs trained for image classification still have localisation information present in their convolutional layers [51]. Consequently, when presented with a dataset of only images and their tags, we first train a network to perform multi-label classification. Thereafter, we extract weak localisation cues for all the object classes that are present in the image (according to the image-level tags). These localisation heatmaps (as shown in Fig. 3) are thresholded to obtain the approximate ground-truth for a particular class. It is possible for localisation heatmaps for different classes to overlap. In this case, thresholded heatmaps occupying a smaller area are given precedence. We found this rule, like [9], to be effective in preventing small or thin objects from being missed.
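A minimal sketch of this thresholding and precedence rule is given below. It assumes per-class heatmaps normalised so that thresholding at a fraction of the per-class maximum is meaningful; the default threshold of 0.5 matches the value we use for Cityscapes (Sect. 4.1) and is otherwise an assumption.

```python
import numpy as np

IGNORE = 255  # assumed value of the "ignore" label

def approx_gt_from_tags(heatmaps, threshold=0.5):
    """Approximate semantic labels from weak localisation heatmaps.

    heatmaps: dict {class_id: (H, W) float array} for the classes present
              in the image according to its image-level tags.
    Each heatmap is thresholded at `threshold` of its maximum; where the
    thresholded regions of several classes overlap, the class occupying the
    smaller area wins, so small or thin objects are not swallowed.
    """
    shape = next(iter(heatmaps.values())).shape
    labels = np.full(shape, IGNORE, dtype=np.uint8)

    masks = {c: h >= threshold * h.max() for c, h in heatmaps.items()}
    # Paint larger regions first so that smaller regions overwrite them on overlap.
    for cls in sorted(masks, key=lambda c: masks[c].sum(), reverse=True):
        labels[masks[cls]] = cls
    return labels
```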

Fig. 3. Approximate ground truth generated from image-level tags using weak localisation cues from a multi-label classification network. Cluttered scenes from Cityscapes with full “stuff” annotations make weak localisation more challenging than on Pascal VOC and ImageNet, which only have “things” labels. Black regions are labelled “ignore”. Colours follow the Cityscapes convention.

Though this approach is independent of the weak localisation method used, we used Grad-CAM [52]. Grad-CAM is agnostic to the network architecture unlike CAM [51] and also achieves better performance than Excitation BP [53] on the ImageNet localisation task [4].
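For completeness, the sketch below outlines Grad-CAM for a single class in a PyTorch style: gradients of the class score with respect to the final convolutional feature maps are globally average-pooled into channel weights, the weighted feature maps are summed and passed through a ReLU, and the result is upsampled to image resolution. The split of the classifier into a `features` backbone and a `head` is an assumption made purely for illustration.

```python
import torch
import torch.nn.functional as F

def grad_cam(features, head, image, class_idx):
    """Grad-CAM heatmap for `class_idx` (illustrative sketch only).

    features: conv backbone mapping an image to (1, K, h, w) feature maps.
    head:     remainder of the network mapping those features to class scores.
    image:    (1, 3, H, W) input tensor.
    """
    fmaps = features(image)                     # (1, K, h, w)
    fmaps.retain_grad()                         # keep gradients of this non-leaf tensor
    score = head(fmaps)[0, class_idx]           # scalar score for the chosen class
    score.backward()

    weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = F.relu((weights * fmaps).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode='bilinear',
                        align_corners=False)
    return (cam / (cam.max() + 1e-8))[0, 0].detach()       # normalised to [0, 1]
```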

We cannot differentiate different instances of the same class from only image tags as the number of instances is unknown. This form of weak supervision is thus appropriate for “stuff” classes which cannot have multiple instances. Note that saliency priors, used by many works such as [10, 31, 32] on Pascal VOC, are not suitable for “stuff” classes as popular saliency datasets [34,35,36] only consider “things” to be salient.

3.4 Iterative Ground Truth Approximation

The ground truth approximated in Sects. 3.2 and 3.3 can be used to train a network from random initialisation. However, the ground truth can subsequently be refined iteratively by using the outputs of the network on the training set as the new approximate ground truth, as shown in Fig. 4. The network’s output is also post-processed with DenseCRF [54] using the parameters of Deeplab [55] (as also done by [9, 42]) to improve the predictions at boundaries. Moreover, any pixel labelled with a “thing” class that lies outside every bounding box of that class is set to “ignore”, as we are certain that a pixel of a “thing” class cannot lie outside its bounding box. For a dataset such as Pascal VOC, we can set these pixels to “background” rather than “ignore”, since “background” is the only “stuff” class in the dataset.
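One round of this refinement could be sketched as follows. The helper names (`model.predict`, `densecrf_refine`) are placeholders standing in for the trained network and the DenseCRF post-processing, and the “ignore” value of 255 is an assumption.

```python
import numpy as np

IGNORE = 255  # assumed value of the "ignore" label

def refine_ground_truth(model, images, boxes_per_image, densecrf_refine, thing_classes):
    """One round of ground-truth refinement (sketch; helper names are placeholders).

    For each training image: predict with the current model, sharpen boundaries
    with DenseCRF, then set any "thing" pixel that falls outside every bounding
    box of its class to IGNORE, since it cannot belong to that class.
    """
    new_gt = []
    for image, boxes in zip(images, boxes_per_image):
        pred = model.predict(image)              # (H, W) label map
        pred = densecrf_refine(image, pred)      # boundary-aware post-processing

        for cls in thing_classes:
            inside = np.zeros(pred.shape, dtype=bool)
            for box_cls, (y0, x0, y1, x1) in boxes:
                if box_cls == cls:
                    inside[y0:y1, x0:x1] = True
            # "thing" pixels outside all boxes of that class cannot be correct.
            pred[(pred == cls) & ~inside] = IGNORE
        new_gt.append(pred)
    return new_gt

# Training then alternates: train on new_gt, regenerate new_gt from the model, repeat.
```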

Fig. 4. By using the output of the trained network, the initial approximate ground truth produced according to Sects. 3.2 and 3.3 (Iteration 0) can be improved. Black regions are “ignore” labels over which the loss is not computed in training. Note that for instance segmentation, permutations of instance labels of the same class are equivalent.

3.5 Network Architecture

Using the approximate ground truth generation method described in this section, we can train a variety of segmentation models. Moreover, we can trivially combine this with full human-annotations to operate in a semi-supervised setting. We use the architecture of Arnab et al. [16] as it produces both semantic- and instance-segmentations, and can be trained end-to-end, given object detections. This network consists of a semantic segmentation subnetwork, followed by an instance subnetwork which partitions the initial semantic segmentation into an instance segmentation with the aid of object detections, as shown in Fig. 5.

Fig. 5. Overview of the network architecture. An initial semantic segmentation is partitioned into an instance segmentation, using the output of an object detector as a cue. Dashed lines indicate paths which are not backpropagated through during training.

We denote the output of the first module, which can be any semantic segmentation network, as \(\mathbf {Q}\) where \(Q_i(l)\) is the probability of pixel i being assigned semantic label l. The instance subnetwork has two inputs – \(\mathbf {Q}\) and a set of object detections for the image. There are D detections, each of the form \(\left( l_d, s_d, B_d \right) \), where \(l_d\) is the detected class label, \(s_d \in [0,1]\) the score and \(B_d\) the set of pixels lying within the bounding box of the \(d\)th detection. This model assumes that each object detection represents a possible instance, and it assigns every pixel in the initial semantic segmentation an instance label using a Conditional Random Field (CRF). This is done by defining a multinomial random variable, \(X_i\), at each of the N pixels in the image, with \(\mathbf {X} = [X_1, X_2 \ldots , X_N]^{\top }\). This variable takes on a label from the set \(\{1,\ldots ,D\}\), where D is the number of detections. This formulation ensures that each pixel can only be assigned one label. The energy of the assignment \(\mathbf {x}\) to all instance variables \(\mathbf {X}\) is then defined as

$$\begin{aligned} E(\mathbf {X} = \mathbf {x}) = -\sum _{i}^{N} \ln \left( w_1 \psi _{Box}(x_i) + w_2 \psi _{Global}(x_i) + \epsilon \right) + \sum _{i < j}^{N}\psi _{Pairwise}(x_i, x_j). \end{aligned}$$
(2)

The first unary term, the box term, encourages a pixel to be assigned to the instance represented by a detection if it falls within its bounding box,

$$\begin{aligned} \psi _{Box}(X_i = k) = {\left\{ \begin{array}{ll} s_k Q_i(l_k) &{} \text {if } i \in B_k\\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(3)

Note that this term is robust to false-positive detections [16] since it is low if the semantic segmentation at pixel i, \(Q_i(l_k)\), does not agree with the detected label \(l_k\). The global term,

$$\begin{aligned} \psi _{Global}(X_i = k) = Q_{i}(l_k), \end{aligned}$$
(4)

is independent of bounding boxes and can thus overcome errors from mislocalised bounding boxes which do not cover the whole instance. Finally, the pairwise term is the common densely-connected Gaussian and bilateral filter [54], encouraging appearance and spatial consistency.

In contrast to [16], we also consider stuff classes (which object detectors are not trained for), by simply adding “dummy” detections covering the whole image with a score of 1 for all stuff classes in the dataset. This allows our network to jointly segment all “things” and “stuff” classes at an instance level. As mentioned before, the box and global unary terms are not affected by false-positive detections arising from detections for classes that do not correspond to the initial semantic segmentation \(\mathbf {Q}\). The Maximum-a-Posteriori (MAP) estimate of the CRF is the final labelling, and this is obtained by using mean-field inference, which is formulated as a differentiable, recurrent network [56, 57].
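The unary part of this CRF, including the “dummy” whole-image detections for “stuff” classes, could be computed as in the sketch below. The pairwise term and the mean-field inference are omitted; the weights \(w_1\), \(w_2\) and \(\epsilon \) are left as parameters, and the function is an illustration of Eqs. (2)–(4) rather than the exact implementation.

```python
import numpy as np

def instance_unaries(Q, detections, stuff_classes, w1=1.0, w2=1.0, eps=1e-6):
    """Negative log of the unary part of Eq. (2) (illustrative sketch).

    Q:          (H, W, L) semantic segmentation probabilities.
    detections: list of (label, score, box) with box = (y0, x0, y1, x1).
    Returns (H, W, D) unary costs, one channel per candidate instance.
    """
    H, W, _ = Q.shape
    # "Dummy" detections covering the whole image with score 1 for stuff classes.
    dets = list(detections) + [(l, 1.0, None) for l in stuff_classes]
    unaries = np.zeros((H, W, len(dets)))

    for k, (label, score, box) in enumerate(dets):
        box_term = np.zeros((H, W))
        if box is None:                              # stuff: box covers the whole image
            box_term = score * Q[:, :, label]
        else:
            y0, x0, y1, x1 = box
            box_term[y0:y1, x0:x1] = score * Q[y0:y1, x0:x1, label]   # Eq. (3)
        global_term = Q[:, :, label]                 # Eq. (4)
        unaries[:, :, k] = -np.log(w1 * box_term + w2 * global_term + eps)
    return unaries
```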

We first train the semantic segmentation subnetwork using a standard cross-entropy loss with the approximate ground truth described in Sects. 3.2 and 3.3. Thereafter, we append the instance subnetwork and finetune the entire network end-to-end. For the instance subnetwork, the loss function must take into account that different permutations of the same instance labelling are equivalent. As a result, the ground truth is “matched” to the prediction before the cross-entropy loss is computed as described in [16].
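The matching step can be illustrated as follows: ground-truth instance ids are permuted so as to maximise their total IoU with the predicted instances before the cross-entropy loss is applied. The use of the Hungarian algorithm via `scipy.optimize.linear_sum_assignment` is an assumption made for this sketch and mirrors the matching of [16] only in spirit; labels are assumed to lie in \(\{0,\ldots ,D-1\}\) with ignore pixels excluded beforehand.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instance_labels(pred_labels, gt_labels, num_instances):
    """Permute ground-truth instance ids to best match the prediction (sketch).

    pred_labels, gt_labels: (H, W) integer instance-label maps with values in
    {0, ..., num_instances - 1}. Returns gt_labels relabelled so that the
    subsequent cross-entropy loss is invariant to the arbitrary ordering of
    ground-truth instances.
    """
    iou = np.zeros((num_instances, num_instances))
    for g in range(num_instances):
        for p in range(num_instances):
            inter = np.logical_and(gt_labels == g, pred_labels == p).sum()
            union = np.logical_or(gt_labels == g, pred_labels == p).sum()
            iou[g, p] = inter / union if union > 0 else 0.0

    gt_ids, pred_ids = linear_sum_assignment(-iou)   # maximise total IoU
    perm = np.arange(num_instances)
    perm[gt_ids] = pred_ids                          # ground-truth id -> matched prediction id
    return perm[gt_labels]
```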

4 Experimental Evaluation

4.1 Experimental Set-Up

Datasets and Weak Supervision. We evaluate on two standard segmentation datasets, Pascal VOC [18] and Cityscapes [6]. Our weakly- and fully-supervised experiments are trained with the same images, but in the former case, pixel-level ground truth is approximated as described in Sect. 3.1 through 3.4.

Pascal VOC has 20 “thing” classes annotated, for which we use bounding box supervision. There is a single “background” class for all other object classes. Following common practice on this dataset, we utilise additional images from the SBD dataset [58] to obtain a training set of 10582 images. In some of our experiments, we also use 54000 images from Microsoft COCO [19], only for the initial pretraining of the semantic subnetwork. We evaluate on the validation set of 1449 images, as the evaluation server is not available for instance segmentation.

Cityscapes has 8 “thing” classes, for which we use bounding box annotations, and 11 “stuff” classes, for which we use image-level tags. We train our initial semantic segmentation model with the images for which 19998 coarse and 2975 fine annotations are available. Thereafter, we train our instance segmentation network using the 2975 images with fine annotations, as these have instance-level ground truth labelled. Details of the multi-label classification network we trained in order to obtain weak localisation cues from image-level tags (Sect. 3.3) are described in the supplementary. The original Grad-CAM authors used a threshold of 15% of the maximum value for weak localisation on ImageNet; we increased this threshold to 50% to obtain higher precision on this more cluttered dataset.

Network Training. Our underlying segmentation network is a reimplementation of PSPNet [59]. For a fair comparison to our weakly-supervised model, we train a fully-supervised model ourselves, using the same training hyperparameters (detailed in the supplementary) instead of using the authors’ public, fully-supervised model. The original PSPNet implementation [59] used a large batch size synchronised over 16 GPUs, as larger batch sizes give better estimates of the batch statistics used for batch normalisation [59, 60]. In contrast, our experiments are performed on a single GPU with a batch size of one \(521 \times 521\) image crop. As a small batch size gives noisy estimates of batch statistics, our batch statistics are “frozen” to the values from the ImageNet-pretrained model, as is common practice [61, 62]. Our instance subnetwork requires object detections, and we train Faster-RCNN [3] for this task. All our networks use a ResNet-101 [1] backbone.
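Freezing the batch statistics could be implemented as in the following PyTorch-style sketch, which is an assumption about the implementation rather than a description of our exact code.

```python
import torch.nn as nn

def freeze_batchnorm(model):
    """Keep BatchNorm layers at their pretrained running statistics.

    With a batch size of one, per-batch statistics are too noisy, so each
    BatchNorm layer is put in eval mode (using its stored running mean and
    variance) and its affine parameters are excluded from the update.
    """
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            module.eval()
            for param in module.parameters():
                param.requires_grad = False

# Call freeze_batchnorm(model) after loading ImageNet-pretrained weights, and
# again after every model.train() call, since train() resets the layer mode.
```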

Evaluation Metrics. We use the \(AP^{r}\) metric [37], commonly used in evaluating instance segmentation. It extends the AP, a ranking metric used in object detection [18], to segmentation where a predicted instance is considered correct if its Intersection over Union (IoU) with the ground truth instance is more than a certain threshold. We also report the \(AP^r_{vol}\) which is the mean \(AP^r\) across a range of IoU thresholds. Following the literature, we use a range of 0.1 to 0.9 in increments of 0.1 on VOC, and 0.5 to 0.95 in increments of 0.05 on Cityscapes.

However, as noted by several authors [16, 21, 27, 63], the \(AP^r\) is a ranking metric that does not penalise methods which predict more instances than there actually are in the image as long as they are ranked correctly. Moreover, as it considers each instance independently, it does not penalise overlapping instances. As a result, we also report the Panoptic Quality (PQ) recently proposed by [21],

$$\begin{aligned} \mathrm {PQ} = \underbrace{\frac{\sum _{(p, g) \in TP } \text {IoU}(p, g)}{| TP |}}_{\text {Segmentation Quality (SQ)}} \times \underbrace{\frac{| TP |}{| TP | + \frac{1}{2} | FP | + \frac{1}{2} | FN |}}_{\text {Detection Quality (DQ)}} \,, \end{aligned}$$
(5)

where p and g are the predicted and ground truth segments, and \( TP \), \( FP \) and \( FN \) respectively denote the set of true positives, false positives and false negatives.
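A minimal per-class sketch of Eq. (5) is shown below, assuming (as in [21]) that a prediction and a ground-truth segment count as a true positive when their IoU exceeds 0.5, which makes the matching effectively unambiguous; the greedy matching here is an illustrative simplification.

```python
def panoptic_quality(pred_segments, gt_segments, iou_fn, iou_thresh=0.5):
    """Panoptic Quality for one class (illustrative sketch of Eq. (5)).

    pred_segments, gt_segments: lists of segments (e.g. boolean masks).
    iou_fn(p, g): IoU between a predicted and a ground-truth segment.
    """
    matched_gt, matched_pred, tp_ious = set(), set(), []
    for pi, p in enumerate(pred_segments):
        for gi, g in enumerate(gt_segments):
            if gi in matched_gt:
                continue
            iou = iou_fn(p, g)
            if iou > iou_thresh:              # true positive match
                matched_gt.add(gi)
                matched_pred.add(pi)
                tp_ious.append(iou)
                break

    tp = len(tp_ious)
    fp = len(pred_segments) - len(matched_pred)
    fn = len(gt_segments) - len(matched_gt)
    if tp == 0:
        return 0.0
    sq = sum(tp_ious) / tp                    # segmentation quality
    dq = tp / (tp + 0.5 * fp + 0.5 * fn)      # detection quality
    return sq * dq
```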

4.2 Results on Pascal VOC

Tables 1 and 2 show the state-of-the-art results of our method for semantic- and instance-segmentation respectively. For both semantic- and instance-segmentation, our weakly-supervised model obtains about 95% of the performance of its fully-supervised counterpart, emphasising that accurate models can be learned from only bounding box annotations, which are significantly quicker and cheaper to obtain than pixelwise annotations. Table 2 also shows that our weakly-supervised model outperforms some recent fully-supervised instance segmentation methods such as [17, 65]. Moreover, our fully-supervised instance segmentation model outperforms all previous work on this dataset. The main difference of our model from [16] is that our network is based on the PSPNet architecture using ResNet-101, whilst [16] used the network of [66] based on VGG [2].

Table 1. Comparison of semantic segmentation performance to recent methods using only weak, bounding-box supervision on Pascal VOC. Note that [11, 12] use the less accurate VGG network, whilst we and [42] use ResNet-101. “FS%” denotes the percentage of fully-supervised performance.
Table 2. Comparison of instance segmentation performance to recent (fully- and weakly-supervised) methods on the VOC 2012 validation set.

We can obtain semantic segmentations from the output of our semantic subnetwork, or from the final instance segmentation (as we produce non-overlapping instances) by taking the union of all instances which have the same semantic label. We find the IoU obtained from the final instance segmentation and that from the initial pretrained semantic subnetwork to be very similar, and report the latter in Table 1. Further qualitative and quantitative results, including success and failure cases, are included in the supplement.

End-to-End Training of Instance Subnetwork. Our instance subnetwork can be trained in a piecewise fashion, or the entire network, including the semantic subnetwork, can be trained end-to-end. End-to-end training was shown to obtain higher performance by [16] for full supervision. We also observe this effect for weak supervision from bounding box annotations. A weakly-supervised model trained with COCO annotations improves from an \(AP^{r}_{vol}\) of 53.3 to 55.5. When COCO is not used to train the initial semantic subnetwork, a slightly larger increase of 3.9 points, from 51.7, is observed. This emphasises that our training strategy (Sect. 3.1) is effective for both semantic- and instance-segmentation.

Iterative Training. The approximate ground truth used to train our model can also be generated in an iterative manner, as discussed in Sect. 3.4. However, as the results from a single iteration (Tables 1 and 2) are already very close to fully-supervised performance, this offers negligible benefit. Iterative training is, however, crucial for obtaining good results on Cityscapes as discussed in Sect. 4.3.

Semi-supervision. We also consider the case where we have a combination of weak and full annotations. As shown in Table 3, we consider all combinations of weak- and full-supervision of the training data from Pascal VOC and COCO. Table 3 shows that training with fully-supervised data from COCO and weakly-supervised data from VOC performs about the same as weak supervision from both datasets, for both semantic- and instance-segmentation. Furthermore, training with fully annotated VOC data and weakly labelled COCO data obtains similar results to full supervision from both datasets. We have qualitatively observed that the annotations in Pascal VOC are of higher quality than those of Microsoft COCO (random samples from both datasets are shown in the supplementary), and this is reflected in the fact that there is little difference between training with weak or full annotations from COCO. This suggests that, in the case of segmentation, per-pixel labelling of additional images is not particularly useful if they are not labelled to a high standard, and that labelling fewer images at a higher quality (Pascal VOC) is more beneficial than labelling many images at a lower quality (COCO). Indeed, Table 3 demonstrates that both semantic- and instance-segmentation networks can be trained to similar performance using only bounding box labels instead of low-quality segmentation masks. The average annotation time can be considered a proxy for segmentation quality; a COCO instance took an average of 79 s to segment [19], whilst this figure is not reported for Pascal VOC [18, 67].

Table 3. Semantic- and instance-segmentation performance on Pascal VOC with varying levels of supervision from the Pascal and COCO datasets. The former is measured by the IoU, and latter by the \(AP^r_{vol}\) and PQ.
Table 4. Semantic segmentation performance on the Cityscapes validation set. We use more informative, bounding-box annotations for “thing” classes, and this is evident from the higher IoU than on “stuff” classes for which we only have image-level tags.

4.3 Results on Cityscapes

Tables 4 and 5 present what are, to our knowledge, the first weakly-supervised results for either semantic- or instance-segmentation on Cityscapes. Table 4 shows that, as expected for semantic segmentation, our weakly-supervised model performs better, relative to the fully-supervised model, on “thing” classes than on “stuff” classes. This is because we have more informative bounding box labels for “things”, compared to only image-level tags for “stuff”. For semantic segmentation, we obtain about 97% of fully-supervised performance for “things” (similar to our results on Pascal VOC) and 83% for “stuff”. Note that we evaluate images at a single scale, and higher absolute scores could be obtained by multi-scale ensembling [59, 61].

For instance-level segmentation, the fully-supervised ratios for the PQ are similar to the IoU ratio for semantic segmentation. In Table 5, we report the \(AP^r_{vol}\) and PQ for both thing and stuff classes, assuming that there is only one instance of a “stuff” class in the image if it is present. Here, the \(AP^{r}_{vol}\) for “stuff” classes is higher than that for “things”. This is because there can only be one instance of a “stuff” class, which makes such instances easier to detect, particularly for classes such as “road” which typically occupy a large portion of the image. The Cityscapes evaluation server, and previous work on this dataset, only report the \(AP^{r}_{vol}\) for “thing” classes. As a result, we report results for “stuff” classes only on the validation set. Table 5 also compares our results to existing work which produces non-overlapping instances on this dataset, and shows that both our fully- and weakly-supervised models are competitive with recently published work. We also include the results of our fully-supervised model, initialised from the public PSPNet model [59] released by the authors, and show that this is competitive with the state-of-the-art [30] among methods producing non-overlapping segmentations (note that [30] also uses the same PSPNet model). Figure 7 shows some predictions of our weakly-supervised model; further results are in the supplementary.

Table 5. Instance-level segmentation results on Cityscapes. On the validation set, we report results for both “thing” (th.) and “stuff” (st.) classes. The online server, which evaluates the test set, only computes the \(AP^{r}\) for “thing” classes. We compare to other fully-supervised methods which produce non-overlapping instances. To our knowledge, no published work has evaluated on both “thing” and “stuff” classes. Our fully-supervised model, initialised from the public PSPNet model [59], is equivalent to our previous work [16], and competitive with the state-of-the-art. Note that we cannot use the public PSPNet pretrained model in a weakly-supervised setting.

Iterative Training. Iteratively refining our approximate ground truth during training, as described in Sect. 3.4, greatly improves our performance on both semantic- and instance-segmentation as shown in Fig. 6. We trained the network for 150 000 iterations before regenerating the approximate ground truth using the network’s own output on the training set. Unlike on Pascal VOC, iterative training is necessary to obtain good performance on Cityscapes as the approximate ground truth generated on the first iteration is not sufficient to obtain high accuracy. This was expected for “stuff” classes, since we began from weak localisation cues derived from the image-level tags. However, as shown in Fig. 6, “thing” classes also improved substantially with iterative training, unlike on Pascal VOC where there was no difference. Compared to VOC, Cityscapes is a more cluttered dataset, and has large scale variations as the distance of an object from the car-mounted camera changes. These dataset differences may explain why the image priors employed by the methods we used (GrabCut [48] and MCG [49]) to obtain approximate ground truth annotations from bounding boxes are less effective. Furthermore, in contrast to Pascal VOC, Cityscapes has frequent co-occurrences of the same objects in many different images, making it more challenging for weakly supervised methods.

Fig. 6. Iteratively refining our approximate ground truth during training improves both semantic and instance segmentation on the Cityscapes validation set.

Effect of Ranking Methods on the \(AP^{r}\). The \(AP^{r}\) metric is a ranking metric derived from object detection. It thus requires predicted instances to be scored such that they are ranked in the correct relative order. As our network uses object detections as an additional input and each detection represents a possible instance, we set the score of a predicted instance to be equal to the object detection score. For “stuff” classes, which object detectors are not trained for, we use a constant detection score of 1 as described in Sect. 3.5. An alternative to using a constant score for “stuff” classes is to take the mean of the softmax probability of all pixels within the segmentation mask. Table 6 shows that this latter method improves the \(AP^r\) for stuff classes. For “things”, ranking with the detection score performs better and comes closer to the oracle performance, which is the maximum \(AP^r\) that could be obtained with the predicted instances.
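The two scoring strategies compared in Table 6 amount to the following sketch, in which the function and argument names are illustrative rather than taken from our implementation.

```python
import numpy as np

def instance_score(mask, class_probs, detection_score=None):
    """Score used to rank a predicted instance for the AP^r metric (sketch).

    mask:            (H, W) boolean mask of the predicted instance.
    class_probs:     (H, W) softmax probability of the instance's class.
    detection_score: score of the detection that spawned the instance, or
                     None for "stuff" instances, which have no detector score.
    """
    if detection_score is not None:           # "thing": rank by the detector's score
        return detection_score
    return float(class_probs[mask].mean())    # "stuff": mean softmax probability in the mask
```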

Table 6. The effect of different instance ranking methods on the \(AP^{r}_{vol}\) of our weakly supervised model computed on the Cityscapes validation set.
Fig. 7. Example results on Cityscapes of our weakly supervised model.

Changing the score of a segmented instance does not change the quality of the actual segmentation, but does impact the \(AP^r\) greatly as shown in Table 6. The PQ, which does not use scores, is unaffected by different ranking methods, and this suggests that it is a better metric for evaluating non-overlapping instance segmentation where each pixel in the image is explained.

5 Conclusion and Future Work

We have presented, to our knowledge, the first weakly-supervised method that jointly produces non-overlapping instance- and semantic-segmentations for both “thing” and “stuff” classes. Using only bounding boxes, we are able to achieve 95% of state-of-the-art fully-supervised performance on Pascal VOC. On Cityscapes, we use image-level annotations for “stuff” classes and obtain 88.8% of fully-supervised performance for semantic segmentation and 85.6% for instance segmentation (measured with the PQ). Crucially, the weak annotations we use require only about 3% of the time of full labelling. As annotating pixel-level segmentation is time-consuming, there is a dilemma between labelling a few images at high quality or many images at low quality. Our semi-supervised experiment suggests that the latter is not an effective use of annotation budgets, as similar performance can be obtained from only bounding-box annotations.

Future work is to perform instance segmentation using only image-level tags and the number of instances of each object present in the image as supervision. This will require a network architecture that does not use object detections as an additional input.