1 Introduction

Semantic segmentation of an image refers to the challenging task of assigning each pixel a categorical label, e.g., motorcycle or person. Segmentation performance is often measured in a pixel-wise fashion, in terms of mean Intersection over Union (mIoU) across categories between the ground-truth (Fig. 1b) and the predicted label map (Fig. 1c).

Fig. 1. We propose new pairwise pixel loss functions that capture the spatial structure of segmentation. Given an image (a), the task is to predict the ground-truth labeling (b). When a deep neural net is trained with the conventional softmax cross-entropy loss on individual pixels, the predicted segmentation (c) is often based on visual appearance and oblivious of the spatial structure of each semantic class. Our work imposes an additional pairwise pixel label affinity loss (d), matching the label relations among neighbouring pixels between the prediction and the ground-truth. We also learn the neighbourhood size for each semantic class, and our adaptive affinity fields result (e) picks out both large bicycle shields and thin spokes of round wheels.

Much progress has been made on segmentation with convolutional neural nets (CNN), mostly due to increasingly powerful pixel-wise classifiers, e.g., VGG-16 [21, 32] and ResNet [14, 33], with the convolutional filters optimized by minimizing the average pixel-wise classification error over the image.

Even with big training data and with deeper and more complex network architectures, pixel-wise classification-based approaches fundamentally lack the spatial discrimination power when foreground pixels and background pixels are close or mixed together: Segmentation is poor when the visual evidence for the foreground is weak, e.g., glass motorcycle shields, or when the spatial structure is small, e.g., thin radial spokes of all the wheels (Fig. 1c).

There have been two main lines of efforts at incorporating structural reasoning into semantic segmentation: Conditional Random Field (CRF) methods [15, 37] and Generative Adversarial Network (GAN) methods [12, 22].

  1. CRF enforces label consistency between pixels measured by the similarity in visual appearance (e.g., raw pixel value). An optimal labeling is solved via message passing algorithms [8, 20]. CRF is employed either as a post-processing step [6, 15], or as a plug-in module inside deep neural networks [19, 37]. Aside from its time-consuming iterative inference routine, CRF is also sensitive to visual appearance changes.

  2. GAN is a recent alternative for imposing structural regularity on the neural network output. Specifically, the predicted label map is tested by a discriminator network on whether it resembles ground-truth label maps in the training set. GAN is notoriously hard to train, being particularly prone to model instability and mode collapse [27].

We propose a simpler approach, by learning to verify the spatial structure of segmentation during training only. Instead of enforcing semantic labels on individual pixels and matching labels between neighbouring pixels using CRF or GAN, we propose the concept of Adaptive Affinity Fields (AAF) to capture and match the relations between neighbouring pixels in the label space. How the semantic label of each pixel is related to those of its neighbouring pixels, e.g., whether they are the same or different, provides a distributed and pixel-centric description of semantic relations in space; collectively, these relations describe, for instance, that motorcycle wheels are round with thin radial spokes. We develop new affinity field matching loss functions to learn a CNN that automatically outputs a segmentation respectful of spatial structures and small details.

The pairwise pixel affinity idea has deep roots in perceptual organization, where local affinity fields have been used to characterize the intrinsic geometric structures in early vision [26], the grouping cues between pixels for image segmentation via spectral graph partitioning [31], and the object hypothesis for non-additive score verification in object recognition at run time [1].

Technically, affinity fields at different neighbourhood sizes encode structural relations at different ranges. Matching the affinity fields at a fixed size would not work well for all semantic categories, e.g., thin structures are needed for persons seen at a distance, whereas large structures are needed for cows seen close-up.

One straightforward solution is to search over a list of possible affinity field sizes, and pick the one that yields the minimal affinity matching loss. However, such a practice would result in selecting trivial sizes which are readily satisfied. For example, for large uniform semantic regions, the optimal affinity field size would be the smallest neighbourhood size of 1, and any pixel-wise classification would already get them right without any additional loss terms in the label space.

We propose adversarial learning for size-adapted affinity field matching. Intuitively, we select the right size by pushing the affinity field matching with different sizes to the extreme: Minimizing the affinity loss should be hard enough to have a real impact on learning, yet it should still be easy enough for the network to actually improve segmentation towards the ground-truth, i.e., a best worst-case learning scenario. Specifically, we formulate our AAF as a minimax problem where we simultaneously maximize the affinity errors over multiple kernel sizes and minimize the overall matching loss. Consequently, our adversarial network learns to assign a smaller affinity field size to person than to cow, as the person category contains finer structures than the cow category.

Our AAF has a few appealing properties over existing approaches (Table 1).

Table 1. Key differences between our method and other popular structure modeling approaches, namely CRF [15] and GAN [12]. The performance (% mIoU) is reported with PSPNet [36] architecture on the Cityscapes [10] validation set.
  1. It provides a versatile representation that encodes spatial structural information in distributed, pixel-centric relations.

  2. It is easier to train than GAN and more efficient than CRF, as AAF only impacts network learning during training, requiring no extra parameters or inference processes during testing.

  3. It is more generalizable to visual domain changes, as AAF operates on the label relations rather than the pixel values, capturing the desired intrinsic geometric regularities despite visual appearance variations.

We demonstrate its effectiveness and efficiency with extensive evaluations on the Cityscapes [10] and PASCAL VOC 2012 [11] datasets, along with its remarkable generalization performance when our learned networks are applied to the GTA5 dataset [28].

2 Related Works

Most methods treat semantic segmentation as a pixel-wise classification task, and those that model structural correlations provide a small gain at a large computational cost.

Semantic Segmentation. Since the introduction of fully convolutional networks for semantic segmentation [21], deeper [16, 33, 36] and wider [25, 29, 34] network architectures have been explored, drastically improving the performance on benchmarks such as PASCAL VOC [11]. For example, Wu et al. [33] achieved higher segmentation accuracy by replacing the backbone network with the more powerful ResNet [14], whereas Yu et al. [34] tackled fine-detailed segmentation using atrous convolutions. While the performance gain in terms of mIoU is impressive, these pixel-wise classification-based approaches fundamentally lack the spatial discrimination power when foreground and background pixels are close or mixed together, resulting in unnatural artifacts in Fig. 1c.

Structure Modeling. Image segmentation has highly correlated outputs among the pixels. Formulating it as an independent pixel labeling problem not only makes the pixel-level classification unnecessarily hard, but also leads to artifacts and spatially incoherent results. Several ways to incorporate structure information into segmentation have been investigated [4, 8, 15, 17, 19, 24, 37]. For example, Chen et al. [6] utilized denseCRF [15] as post-processing to refine the final segmentation results. Zheng et al. [37] and Liu et al. [19] further made the CRF module differentiable within the deep neural network. Pairwise low-level image cues, such as grouping affinity [18, 23] and contour cues [3, 5], have also been used to encode structures. However, these methods are sensitive to visual appearance changes or require expensive iterative inference procedures.

Our work provides another perspective to structure modeling by matching the relations between neighbouring pixels in the label space. Our segmentation network learns to verify the spatial structure of segmentation only during training; once it is trained, it is ready for deployment without run-time inference.

3 Our Approach: Adaptive Affinity Fields

We first briefly revisit the classic pixel-wise cross-entropy loss commonly used in semantic segmentation. The drawbacks of pixel-wise supervision lead to our concept of region-wise supervision. We then describe our region-wise supervision through affinity fields, and introduce an adversarial process that learns an adaptive affinity kernel size for each category. We summarize the overall AAF architecture in Fig. 2.

Fig. 2. Method overview: learning semantic segmentation with adaptive affinity fields. The adaptive affinity fields consist of two parts: the affinity field loss with multiple kernel sizes and the corresponding categorical adversarial weightings. Note that the adaptive affinity fields are only introduced during training; there is no extra computation during inference.

3.1 From Pixel-Wise Supervision to Region-Wise Supervision

Pixel-wise cross-entropy loss is most often used in CNNs for semantic segmentation [6, 21]. It penalizes pixel-wise predictions independently and is known as a form of unary supervision. It implicitly assumes that the relationships between pixels can be learned as the effective receptive field increases with deeper layers. Given the predicted categorical probability \(\hat{y}_i(l)\) at pixel i w.r.t. its ground-truth categorical label l, the total loss is the average over all pixels of the cross-entropy loss at pixel i:

$$\begin{aligned} \mathcal {L}_\text {unary}^i = \mathcal {L}_\text {cross-entropy}^i = -\log \hat{y}_i(l). \end{aligned}$$
(1)
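For concreteness, Eq. (1) corresponds to a standard per-pixel cross-entropy. Below is a minimal PyTorch sketch, not the authors' released code; the function name and the `ignore_index` value for unlabeled pixels are our assumptions.

```python
import torch.nn.functional as F

def unary_loss(logits, labels, ignore_index=255):
    """Average pixel-wise cross-entropy, Eq. (1).

    logits: (N, C, H, W) raw class scores; labels: (N, H, W) integer class ids.
    F.cross_entropy applies log-softmax internally and averages
    -log y_hat_i(l) over all non-ignored pixels i.
    """
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)
```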

Such a unary loss does not take the semantic label correlations and scene structure into account. The objects in different categories interact with each other in certain patterns. For example, cars are usually on the road while pedestrians are on the sidewalk; buildings are surrounded by the sky but never on top of it. Also, some shapes in a certain category occur more frequently than others, such as rectangles in trains, circles in bikes, and straight vertical lines in poles. These kinds of inter-class and intra-class pixel relationships are informative and can be integrated into learning as structure reasoning. We are thus inspired to propose an additional region-wise loss to impose penalties on inconsistent unary predictions and encourage the network to learn such intrinsic pixel relationships.

Region-wise supervision extends its pixel-wise counterpart from independent pixels to neighborhoods of pixels, i.e., the region-wise loss considers a patch of predictions and ground truth jointly. Such region-wise supervision \(\mathcal {L}_\text {region}\) involves designing a specific loss function for a patch of predictions \(\mathcal {N}(\hat{y_i})\) and the corresponding patch of ground truth \(\mathcal {N}({y_i})\) centered at pixel i, where \(\mathcal {N}(\cdot )\) denotes the neighborhood.

The overall objective is hence to minimize the combination of unary and region losses, balanced by a constant \(\lambda \):

$$\begin{aligned} S^* = \mathop {\text {argmin}\,}\limits _S \mathcal {L} = \mathop {\text {argmin}\,}\limits _S \frac{1}{n}\sum _i\Big (\mathcal {L}_\text {unary}^i(\hat{y_i}, y_i) + \lambda \mathcal {L}_\text {region}^i\big (\mathcal {N}(\hat{y_i}), \mathcal {N}(y_i)\big ) \Big ), \end{aligned}$$
(2)

where n is the total number of pixels. We omit index i and averaging notations for simplicity in the rest of the paper.
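A sketch of how the combined objective in Eq. (2) can be assembled during training, reusing the `unary_loss` sketch above; `region_loss_fn` stands for any region-wise term (its concrete form is given in Sects. 3.2 and 3.3), and the names are illustrative.

```python
def total_loss(logits, labels, region_loss_fn, lam=1.0):
    """Combined objective of Eq. (2): unary loss plus lambda times region loss."""
    l_unary = unary_loss(logits, labels)      # Eq. (1), averaged over pixels
    probs = logits.softmax(dim=1)             # y_hat: per-pixel class probabilities
    l_region = region_loss_fn(probs, labels)  # region-wise supervision on patches
    return l_unary + lam * l_region
```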

The benefits of the addition of region-wise supervision have been explored in previous works. For example, Luc et al.[22] exploited GAN [12] as structural priors, and Mostajabi et al.[24] pre-trained an additional auto-encoder to inject structure priors into training the segmentation network. However, their approaches require much hyper-parameter tuning and are prone to overfitting, resulting in very small gains over strong baseline models. Please see Table 1 for a comparison.

3.2 Affinity Field Loss Function

Our affinity field loss function overcomes these drawbacks; it is a flexible region-wise supervision approach that is also easy to optimize.

The use of pairwise pixel affinity has a long history in image segmentation  [31, 35]. The grouping relationships between neighbouring pixels are derived from the image and represented by a graph, where a node denotes a pixel and a weighted edge between two nodes captures the similarity between two pixels. Image segmentation then becomes a graph partitioning problem, where all the nodes are divided into disjoint sets, with maximal weighted edges within the sets and minimal weighted edges between the sets.

We define pairwise pixel affinity based not on the image, but on the ground-truth label map. There are two types of label relationships between a pair of pixels: their labels are either the same or different. If pixel i and its neighbor j have the same categorical label, we impose a grouping force which encourages the network predictions at i and j to be similar. Otherwise, we impose a separating force which pushes apart their label predictions. These two forces are illustrated in Fig. 3 (left).

Specifically, we define a pairwise affinity loss based on the KL divergence between binary classification probabilities, consistent with the cross-entropy loss for the unary label prediction term. For pixel i and its neighbour j, depending on whether the two pixels belong to the same category c in the ground-truth label map y, we define a non-boundary term \(\mathcal {L}_{\text {affinity}}^{i\bar{b}c}\) for the grouping force and a boundary term \(\mathcal {L}_{\text {affinity}}^{ibc}\) for the separating force in the prediction map \(\hat{y}\):

$$\begin{aligned} \mathcal {L}_{\text {affinity}}^{ic} = {\left\{ \begin{array}{ll} \mathcal {L}_{\text {affinity}}^{i\bar{b}c} = D_{KL}(\hat{y}_j(c)|| \hat{y}_i(c)) &{} \text {if } y_i(c) = y_j(c) \\ \mathcal {L}_{\text {affinity}}^{ibc} = \max \{0, m - D_{KL}(\hat{y}_j(c) || \hat{y}_i(c))\} &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(3)

\(D_{KL}(\cdot )\) is the Kullback-Leibler divergence between two Bernoulli distributions P and Q with parameters p and q, respectively: \(D_{KL}(P||Q)= p\log \frac{p}{q}+\bar{p}\log \frac{\bar{p}}{\bar{q}}\) for the binary distributions \([p,1-p]\) and \([q,1-q]\), where \(p,q \in [0,1]\), \(\bar{p}=1-p\), and \(\bar{q}=1-q\). For simplicity, we abbreviate the notation as \(D_{KL}(p||q)\). \(\hat{y}_j(c)\) denotes the predicted probability of pixel j for class c. The overall loss is the average of \(\mathcal {L}_{\text {affinity}}^{ic}\) over all categories and pixels.
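The following PyTorch sketch illustrates Eq. (3) for a single \(k \times k\) neighbourhood, pairing each pixel with its neighbours via shifted tensors. It assumes softmax probabilities and one-hot ground truth; border pixels are handled naively by wrap-around shifts, which a careful implementation would mask out, and the function names are ours rather than the authors'.

```python
import torch

def binary_kl(p, q, eps=1e-8):
    """KL divergence between Bernoulli(p) and Bernoulli(q), elementwise."""
    p, q = p.clamp(eps, 1 - eps), q.clamp(eps, 1 - eps)
    return p * (p / q).log() + (1 - p) * ((1 - p) / (1 - q)).log()

def affinity_loss(probs, onehot, margin=3.0, k=3):
    """Eq. (3), averaged over all offsets in a k x k neighbourhood.

    probs:  (N, C, H, W) predicted probabilities (softmax output).
    onehot: (N, C, H, W) one-hot ground-truth labels.
    """
    r = k // 2
    losses = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            p_j = torch.roll(probs, shifts=(dy, dx), dims=(2, 3))   # neighbour predictions
            y_j = torch.roll(onehot, shifts=(dy, dx), dims=(2, 3))  # neighbour labels
            same = (onehot == y_j).float()        # 1 where y_i(c) == y_j(c)
            kl = binary_kl(p_j, probs)            # D_KL(y_hat_j(c) || y_hat_i(c))
            non_boundary = same * kl                             # grouping force
            boundary = (1 - same) * (margin - kl).clamp(min=0)   # separating force
            losses.append(non_boundary + boundary)
    return torch.stack(losses).mean()
```

For the adaptive version in Sect. 3.3, the non-boundary and boundary terms are kept separate per class and per kernel size instead of being summed as above.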

Discussion 1. Our affinity loss encourages similar network predictions on two pixels of the same ground-truth label, regardless of what their actual labels are. The collection of such pairwise bonds inside a segment ensures that all the pixels achieve the same label. On the other hand, our affinity loss pushes network predictions apart on two pixels of different ground-truth labels, again regardless of what their actual labels are. The collection of such pairwise repulsions helps create clear segmentation boundaries.

Discussion 2. Our affinity loss may appear similar to CRF [15] on the pairwise grouping or separating forces between pixels. However, a crucial difference is that CRF models require iterative inference to find a solution, whereas our affinity loss only impacts the network training with pairwise supervision. A similar perspective is metric learning with contrastive loss [9], commonly used in face identification tasks. Our affinity loss works better for segmentation tasks, because it penalizes the network predictions directly, and our pairwise supervision is in addition to and consistent with the conventional unary supervision.

Fig. 3. Left: Our affinity field loss separates predicted probabilities across the boundary and unifies them within the segment. Right: The affinity fields can be defined over multiple ranges. Minimizing the affinity loss over different ranges results in trivial solutions that are readily satisfied. Our size-adaptive affinity field loss is achieved with adversarial learning: Maximizing the affinity loss over different kernel sizes selects the most critical range for imposing pairwise relationships in the label space, and our goal is to minimize this maximal loss, i.e., to use the best worst-case scenario for most effective training.

3.3 Adaptive Kernel Sizes from Adversarial Learning

Region-wise supervision often requires a preset kernel size for CNNs, with pairwise pixel relationships measured in the same fashion across all pixel locations. However, we cannot expect one kernel size to fit all categories, since the ideal kernel size for each category varies with the average object size and the object shape complexity.

We propose a size-adaptive affinity field loss function, optimizing the weights over a set of affinity field sizes for each category in the loop:

$$\begin{aligned} \mathcal {L}_\text {multiscale} = \sum _c \sum _k w_{ck} \mathcal {L}_\text {region}^{ck} \quad \text {s.t. } \sum _k w_{ck} = 1 \quad \text {and}\quad w_{ck} \ge 0 \end{aligned}$$
(4)

where \(\mathcal {L}_\text {region}^{ck}\) is a region loss defined in Eq. (2), yet operating on a specific class channel c with kernel size \(k\times k\) and a corresponding weighting \(w_{ck}\).
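One simple way to satisfy the constraints in Eq. (4) is to parameterize the weights with a per-class softmax over free variables. The paper does not prescribe a particular parameterization, so the following module is a hypothetical sketch.

```python
import torch
import torch.nn as nn

class KernelSizeWeights(nn.Module):
    """Per-class weights over kernel sizes under the Eq. (4) constraints."""

    def __init__(self, num_classes, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Free parameters; a softmax keeps each row on the probability simplex.
        self.logits = nn.Parameter(torch.zeros(num_classes, len(kernel_sizes)))
        self.kernel_sizes = kernel_sizes

    def forward(self):
        return self.logits.softmax(dim=1)  # (C, K): w_ck >= 0 and sum_k w_ck = 1
```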

If we just minimize the affinity loss with the size weighting w included, w would likely fall into a trivial solution. As illustrated in Fig. 3 (right), the affinity loss would be minimized if the smallest kernels were highly weighted for non-boundary terms and the largest kernels for boundary terms, since nearby pixels are more likely to belong to the same object and far-away pixels to different objects. Unary predictions based on the image would naturally have such statistics, nullifying any potential effect from our pairwise affinity supervision.

To optimize the size weighting without trivializing the affinity loss, we need to push the selection of kernel sizes to the extreme. Intuitively, we need to enforce pixels in the same segment to have the same label prediction over as large a range as possible, and likewise to enforce pixels in different segments to have different predictions over as small a range as possible. We use the best worst-case scenario for most effective training.

We formulate the adaptive kernel size selection process as optimizing a two-player minimax game: While the segmenter should always attempt to minimize the total loss, the weighting for different kernel sizes in the loss should attempt to maximize the total loss in order to capture the most critical neighbourhood sizes. Formally, we have:

$$\begin{aligned} S^* = \mathop {\text {argmin}\,}\limits _S \max _w \mathcal {L}_\text {unary} + \mathcal {L}_\text {multiscale}. \end{aligned}$$
(5)

For our size-adaptive affinity field learning, we separate the non-boundary term \(\mathcal {L}_\text {affinity}^{\bar{b}ck}\) and boundary term \(\mathcal {L}_\text {affinity}^{bck}\) in Eq. (3) since their ideal kernel sizes would be different. Our adaptive affinity field (AAF) loss becomes:

$$\begin{aligned} S^*&= \mathop {\text {argmin}\,}\limits _S \max _w \mathcal {L}_\text {unary} + \mathcal {L}_\text {AAF},\\ \mathcal {L}_\text {AAF}&= \sum _c \sum _k (w_{\bar{b}ck} \mathcal {L}_\text {affinity}^{\bar{b}ck} + w_{bck} \mathcal {L}_\text {affinity}^{bck}),\end{aligned}$$
(6)
$$\begin{aligned} \text {s.t. } \sum _k w_{\bar{b}ck}&= \sum _k w_{bck}= 1 \text {\ and \ } w_{\bar{b}ck},w_{bck} \ge 0.\nonumber \end{aligned}$$
(7)
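A hypothetical training step for the minimax objective in Eqs. (5)-(7): the segmenter parameters take a gradient-descent step on the total loss, while the kernel-size weights (e.g., the `KernelSizeWeights` module sketched above, instantiated separately for the non-boundary and boundary terms as `w_nb_module` and `w_b_module`) take a gradient-ascent step on the same loss. The optimizer settings and the `aaf_loss` helper are assumptions, not the authors' released implementation.

```python
import torch

# `segmenter`, `unary_loss` and `aaf_loss` are assumed to be defined elsewhere.
opt_S = torch.optim.SGD(segmenter.parameters(), lr=1e-3, momentum=0.9)
opt_w = torch.optim.SGD(list(w_nb_module.parameters()) +
                        list(w_b_module.parameters()), lr=1e-2)

def train_step(images, labels, onehot):
    logits = segmenter(images)
    probs = logits.softmax(dim=1)
    # Weighted affinity loss over classes and kernel sizes, Eq. (6).
    total = unary_loss(logits, labels) + aaf_loss(probs, onehot,
                                                  w_nb_module(), w_b_module())
    opt_S.zero_grad()
    opt_w.zero_grad()
    total.backward()
    # Gradient descent for the segmenter S, gradient ascent for the weights w:
    # flip the sign of the weight gradients before stepping.
    for p in list(w_nb_module.parameters()) + list(w_b_module.parameters()):
        if p.grad is not None:
            p.grad.neg_()
    opt_S.step()
    opt_w.step()
    return total.item()
```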

4 Experimental Setup

4.1 Datasets

We compare our proposed affinity fields and AAF with other competing methods on the PASCAL VOC 2012 [11] and Cityscapes [10] datasets.

PASCAL VOC 2012. PASCAL VOC 2012 [11] segmentation dataset contains 20 object categories and one background class. Following the procedure of [6, 21, 36], we use augmented data with the annotations of [13], resulting in 10,582, 1,449, and 1,456 images for training, validation and testing.

Cityscapes. Cityscapes [10] is a dataset for semantic urban street scene understanding. It contains 5,000 high-quality, finely annotated pixel-level images, divided into training, validation, and testing sets with 2,975, 500, and 1,525 images, respectively. It defines 19 categories grouped into flat, human, vehicle, construction, object, nature, etc.

4.2 Evaluation Metrics

Most existing semantic segmentation works adopt pixel-wise mIoU [21] as their metric. To fully examine the effectiveness of our AAF on fine structures in particular, we also evaluate all the models using instance-wise mIoU and boundary detection metrics.

Instance-Wise mIoU. Since the pixel-wise mIoU metric is often biased toward large objects, we introduce the instance-wise mIoU to alleviate the bias, which allows us to evaluate the performance on smaller objects more fairly. The per-category instance-wise mIoU is formulated as \(\hat{U}_c = \frac{\sum _x n_{c,x} \times U_{c,x}}{\sum _x n_{c,x}},\) where \(n_{c,x}\) and \(U_{c,x}\) are the number of instances and the IoU of class c in image x, respectively.
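A small sketch of the instance-wise metric, assuming the per-image IoUs \(U_{c,x}\) and instance counts \(n_{c,x}\) have already been computed; the data layout is hypothetical.

```python
import numpy as np

def instance_wise_iou(iou_per_image, count_per_image):
    """Instance-weighted IoU per class and its mean.

    iou_per_image:   {class_id: [U_{c,x} for each image x]}
    count_per_image: {class_id: [n_{c,x} for each image x]}
    """
    per_class = {}
    for c, ious in iou_per_image.items():
        n = np.asarray(count_per_image[c], dtype=float)
        u = np.asarray(ious, dtype=float)
        per_class[c] = float((n * u).sum() / max(n.sum(), 1.0))  # weight IoU by instance count
    return per_class, float(np.mean(list(per_class.values())))
```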

Boundary Detection Metrics. We compute semantic boundaries using the semantic predictions and benchmark the results using the standard benchmark for contour detection proposed by [2], which summarizes the results by precision, recall, and f-measure.

4.3 Methods of Comparison

We briefly describe other popular methods that are used for comparison in our experiments, namely, GAN’s adversarial learning [12], contrastive loss [9], and CRF [15].

GAN’s Adversarial Learning. We investigate a popular framework, Generative Adversarial Networks (GAN) [12]. The discriminator D in the GAN serves to inject priors on region structures. The adversarial loss is formulated as

$$\begin{aligned} \mathcal {L}_{\text {adversarial}}^i = \log D(\mathcal {N}(y_i)) + \log (1-D(\mathcal {N}(\hat{y_i}))). \end{aligned}$$
(8)

We simultaneously train the segmentation network S to minimize \(\log (1-D(\mathcal {N}(\hat{y_i})))\) and the discriminator to maximize \(\mathcal {L}_{\text {adversarial}}\).
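One way to implement Eq. (8) with binary cross-entropy terms, assuming the discriminator D ends with a sigmoid; the segmenter adds `loss_S` to its objective while D is trained to minimize `loss_D` (equivalently, to maximize Eq. (8)). This is a sketch of the baseline, not the exact implementation used in our experiments.

```python
import torch
import torch.nn.functional as F

def gan_losses(D, gt_patch, pred_patch, eps=1e-8):
    """Adversarial terms of Eq. (8) for a batch of label-map patches."""
    d_real = D(gt_patch)    # should approach 1 for ground-truth patches
    d_fake = D(pred_patch)  # should approach 0 for predicted patches
    # Minimizing loss_D maximizes log D(N(y)) + log(1 - D(N(y_hat))).
    loss_D = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    # The segmenter minimizes log(1 - D(N(y_hat))), as stated above.
    loss_S = torch.log(1 - d_fake + eps).mean()
    return loss_D, loss_S
```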

Pixel Embedding. We study region-wise supervision over the feature map, implemented by imposing the contrastive loss [9] on the last convolutional layer before the softmax layer. The contrastive loss is formulated as

$$\begin{aligned} \mathcal {L}_{\text {contrast}}^{i} = {\left\{ \begin{array}{ll} \mathcal {L}_{\text {contrast}}^{i\bar{e}} = \Vert f_j-f_i\Vert ^2_2 &{} \text {if } y_i(c) = y_j(c) \\ \mathcal {L}_{\text {contrast}}^{ie} = \max \{0, m - \Vert f_j-f_i\Vert ^2_2\} &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$
(9)

where \(f_i\) denotes the \(L_2\)-normalized feature vector at pixel i, and m is set to 0.2.
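A sketch of this pixel-embedding baseline, Eq. (9), for a single neighbour offset, using the same shifted-tensor convention as the affinity loss above; averaging over all offsets and masking of border pixels are omitted, and the names are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feats, labels, margin=0.2, dy=0, dx=1):
    """Eq. (9) for one neighbour offset (dy, dx).

    feats:  (N, D, H, W) features from the last conv layer before the softmax.
    labels: (N, H, W) integer ground-truth class ids.
    """
    f_i = F.normalize(feats, dim=1)                       # L2-normalized embeddings
    f_j = torch.roll(f_i, shifts=(dy, dx), dims=(2, 3))   # neighbour embeddings
    dist2 = ((f_i - f_j) ** 2).sum(dim=1)                 # squared L2 distance
    same = (labels == torch.roll(labels, shifts=(dy, dx), dims=(1, 2))).float()
    pull = same * dist2                                   # same label: pull together
    push = (1 - same) * (margin - dist2).clamp(min=0)     # different label: push apart
    return (pull + push).mean()
```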

CRF-Based Processing. We follow [6]’s implementation by post-processing the prediction with dense-CRF [15]. We set \(bi\_w\) to 1, \(bi\_xy\_std\) to 40, \(bi\_rgb\_std\) to 3, \(pos\_w\) to 1, and \(pos\_xy\_std\) to 1 for all experiments. It is worth mentioning that CRF takes an additional 40 s to generate the final results on Cityscapes, while our proposed methods introduce no inference overhead.
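For reference, a sketch of the dense-CRF post-processing step using the pydensecrf package; the mapping of the listed parameters onto the Gaussian and bilateral kernels, and the number of mean-field iterations, are our assumptions rather than the exact settings of [6].

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_refine(image, probs, n_iters=10,
               pos_w=1, pos_xy_std=1, bi_w=1, bi_xy_std=40, bi_rgb_std=3):
    """Dense-CRF refinement of softmax probabilities.

    image: (H, W, 3) uint8 RGB; probs: (C, H, W) softmax probabilities.
    """
    h, w = image.shape[:2]
    d = dcrf.DenseCRF2D(w, h, probs.shape[0])
    d.setUnaryEnergy(unary_from_softmax(probs))
    d.addPairwiseGaussian(sxy=pos_xy_std, compat=pos_w)      # smoothness kernel
    d.addPairwiseBilateral(sxy=bi_xy_std, srgb=bi_rgb_std,   # appearance kernel
                           rgbim=np.ascontiguousarray(image), compat=bi_w)
    q = d.inference(n_iters)
    return np.argmax(q, axis=0).reshape(h, w)
```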

4.4 Implementation Details

Our implementation follows that of the base architectures, which are PSPNet [36] in most cases, or FCN [21]. We use the poly learning rate policy, where the current learning rate equals the base one multiplied by \((1-\frac{\text {iter}}{\text {max\_iter}})^{0.9}\). We set the base learning rate to 0.001. The number of training iterations is 30 K on the VOC dataset and 90 K on the Cityscapes dataset for all experiments; the performance can be further improved with more iterations. Momentum and weight decay are set to 0.9 and 0.0005, respectively. For data augmentation, we adopt random mirroring and random resizing between 0.5 and 2 for all datasets. We do not upscale the logits (prediction map) back to the input image resolution; instead, we follow [6]’s setting by downsampling the ground-truth labels for training (\(output\_stride=8\)).
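The poly schedule amounts to a one-line rule; a small sketch, with the halfway point of a 90 K-iteration Cityscapes run as a worked example.

```python
def poly_lr(base_lr, it, max_it, power=0.9):
    """Poly learning-rate policy: base_lr * (1 - it / max_it) ** power."""
    return base_lr * (1.0 - it / max_it) ** power

# Halfway through 90 K iterations with base_lr = 0.001:
# poly_lr(0.001, 45_000, 90_000) = 0.001 * 0.5 ** 0.9 ≈ 5.36e-4
```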

PSPNet [36] shows that larger “cropsize” and “batchsize” can yield better performance. In their implementation, “cropsize” can be up to \(720\times 720\) and “batchsize” up to 16 using 16 GPUs. To speed up the experiments for validation on VOC, we reduce “cropsize” to \(336\times 336\) and “batchsize” to 8 so that a single GTX Titan X GPU is sufficient for training. We set “cropsize” to \(480\times 480\) during inference. For testing on PASCAL VOC 2012 and all experiments on the Cityscapes dataset, we use 4 GPUs to train the network. On the VOC dataset, we set “batchsize” to 16 and “cropsize” to \(480 \times 480\). On Cityscapes, we set “batchsize” to 8 and “cropsize” to \(720 \times 720\). For inference, we boost the performance by averaging scores from left-right flipped and multi-scale inputs (\(scales = \{0.5,0.75,1,1.25,1.5,1.75\}\)).

For affinity fields and AAF, \(\lambda \) is set to 1.0 and margin m is set to 3.0. We use ResNet101 [14] as the backbone network and initialize the models with weights pre-trained on ImageNet [30].

Table 2. Per-class results on Pascal VOC 2012 validation set. Gray colored background denotes using FCN as the base architecture.
Table 3. Per-class results on Cityscapes validation set. Gray colored background denotes using FCN as the base architecture.

5 Experimental Results

We benchmark our proposed methods on two datasets, PASCAL VOC 2012 [11] and Cityscapes [10]. All methods are evaluated by three metrics: mIoU, instance-wise mIoU and boundary detection recall. We include some visual examples to demonstrate the effectiveness of our proposed methods in Fig. 5.

5.1 Pixel-Level Evaluation

Validation Results. For training on PASCAL VOC 2012 [11], we first train on \(train\_aug\) for 30 K iterations and then fine-tune on \(train\) for another 30 K iterations with a base learning rate of 0.0001. For Cityscapes [10], we only train on the finely annotated images for 90 K iterations. We summarize the mIoU results on the validation sets in Tables 2 and 3, respectively.

With FCN [21] as the base architecture, the affinity field loss and AAF improve the performance by \(2.16\%\) and \(3.04\%\) on VOC and by \(1.88\%\) and \(2.37\%\) on Cityscapes. With PSPNet [36] as the base architecture, the results also improve consistently: GAN loss, embedding contrastive loss, affinity field loss and AAF improve the mean IoU by \(0.62\%\), \(1.24\%\), \(1.68\%\) and \(2.27\%\) on VOC; affinity field loss and AAF improve it by \(2.00\%\) and \(2.52\%\) on Cityscapes. It is worth noting that the large improvements over PSPNet on VOC are mostly in categories with fine structures, such as “bike”, “chair”, “person”, and “plant”.

Testing Results. On PASCAL VOC 2012, the training procedure for PSPNet and AAF is the same: we first train the networks on \(train\_aug\) and then fine-tune on \(train\_val\). We report the testing results on VOC 2012 and Cityscapes in Tables 4 and 5, respectively. Our re-trained PSPNet does not reach the same performance as originally reported in the paper because we do not bootstrap the performance by fine-tuning on hard examples (like “bike” images), as pointed out in [7]. Our proposed AAF achieves \(82.17\%\) and \(79.07\%\) mIoU, which is better than PSPNet by \(1.54\%\) and \(2.77\%\), respectively, and competitive with state-of-the-art performance.

Table 4. Per-class results on Pascal VOC 2012 testing set.
Table 5. Per-class results on Cityscapes test set.

5.2 Instance-Level Evaluation

We measure the instance-wise mIoU on the VOC and Cityscapes validation sets, summarized in Tables 6 and 7, respectively. In instance-wise mIoU, our AAF is higher than the base architecture by \(3.94\%\) on VOC and \(2.94\%\) on Cityscapes. The improvements on fine-structured categories are more prominent. For example, “bottle” is improved by \(12.89\%\) on VOC, while “pole” and “tlight” are improved by \(9.51\%\) and \(9.04\%\) on Cityscapes.

Table 6. Per-class instance-wise IoU results on Pascal VOC 2012 validation set.
Table 7. Per-class instance-wise IOU results on Cityscapes validation set.

5.3 Boundary-Level Evaluation

Next, we analyze quantitatively the improvements in boundary localization. We include the boundary recall on VOC in Table 8 and on Cityscapes in Table 9. We omit the precision table due to the smaller performance differences. The overall boundary recall is improved by \(7.9\%\) and \(8.0\%\) on VOC and Cityscapes, respectively. It is worth noting that the boundary recall is improved for every category, demonstrating that boundaries of all categories benefit from affinity fields and AAF. Among all, the improvements on categories with complicated boundaries, such as “bike”, “bird”, “boat”, “chair”, “person”, and “plant”, are significant on VOC. On Cityscapes, objects with thin structures are improved most, such as “pole”, “tlight”, “tsign”, “person”, “rider”, and “bike”.

Table 8. Per-class boundary recall results on Pascal VOC 2012 validation set.
Table 9. Per-class boundary recall results on Cityscapes validation set.

5.4 Adaptive Affinity Field Size Analysis

We further analyze our proposed AAF methods on: (1) optimal affinity field size for each category, and (2) effective combinations of affinity field sizes.

Optimal Adaptive Affinity Field Size. We conduct experiments on VOC with our proposed AAF on three \(k \times k\) kernel sizes, where \(k=3,5,7\). We report the optimal adaptive kernel size for the edge (boundary) term, calculated as \(k^{e}_c=\sum _{k} w_{eck} \times k\) and summarized in Fig. 4. As shown, “person” and “dog” benefit from smaller kernel sizes (3.1 and 3.4), while “cow” and “plant” benefit from larger kernel sizes (4.6 and 4.5). We display some image patches with the corresponding effective receptive field sizes.
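The effective size is simply the weight-averaged kernel size; a tiny sketch with made-up weights for illustration (the actual learned weights are shown in Fig. 4).

```python
import numpy as np

def effective_kernel_size(weights, kernel_sizes=(3, 5, 7)):
    """k_c^e = sum_k w_{eck} * k for one category's edge-term weights."""
    return float(np.dot(np.asarray(weights, dtype=float), np.asarray(kernel_sizes)))

# Illustrative (made-up) weighting concentrated on the 3x3 kernel:
print(effective_kernel_size([0.85, 0.10, 0.05]))  # 3.4
```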

Fig. 4. Left: The optimal weightings for different kernel sizes of the edge term in AAF for each category on the PASCAL VOC 2012 validation set. Right: Visualization of image patches with corresponding effective receptive field sizes, suggesting how kernel sizes capture the shape complexity in critical regions of different categories.

Combinations of Affinity Field Sizes. We explore the effectiveness of different selections of \(k \times k\) kernels, where \(k \in \{3,5,7\}\), for AAF. As summarized in Table 10, we observe that the combination of \(3 \times 3\) and \(5 \times 5\) kernels yields the optimal performance.

Table 10. Per-category IOU results of AAF with different combinations of kernel sizes k on VOC 2012 validation set. ‘\(\checkmark \)’ denotes the inclusion of respective kernel size as opposed to ‘\(\times \)’.
Fig. 5. Visual quality comparisons on the VOC 2012 [11] validation set (first four rows), the Cityscapes [10] validation set (middle two rows) and GTA5 [28] Part 1 (bottom row): (a) image, (b) ground truth, (c) PSPNet [36], (d) affinity fields, and (e) adaptive affinity fields (AAF).

5.5 Generalizability

We further investigate the robustness of our proposed methods on different domains. We train the networks on the Cityscapes dataset [10] and test them on another dataset, Grand Theft Auto V (GTA5) [28], as shown in Fig. 5. The GTA5 dataset is generated from the photo-realistic computer game Grand Theft Auto V [28] and consists of 24,966 images with densely labelled segmentation maps compatible with Cityscapes. We test on GTA5 Part 1 (2,500 images). We summarize the performance in Table 11. Without fine-tuning, our proposed AAF outperforms the PSPNet [36] baseline model by \(9.5\%\) in mean pixel accuracy and \(1.46\%\) in mIoU, demonstrating the robustness of our proposed methods against appearance variations.

Table 11. Per-class results on GTA5 Part 1.

6 Summary

We propose adaptive affinity fields (AAF) for semantic segmentation, which incorporate geometric regularities into segmentation models, and learn local relations with adaptive ranges through adversarial training. Compared to other alternatives, our AAF model is (1) effective (encoding rich structural relations), (2) efficient (introducing no inference overhead), and (3) robust (not sensitive to domain changes). Our approach achieves competitive performance on standard benchmarks and also generalizes well on unseen data. It provides a novel perspective towards structure modeling in deep learning.