
1 Introduction

Visual scene understanding from movable platforms has been gaining increased attention due to the active development of autonomous systems and vehicles. Semantic segmentation and dense motion estimation are two core components for recognizing the surrounding environment and analyzing the motion of entities in the scene. The performance of techniques in both areas has been steadily increasing, as reported on and fueled by public benchmarks (e.g., KITTI [9], MPI Sintel [4], or Cityscapes [6]). Along with the increasing popularity and importance of the two areas, there has been a recent trend in the literature toward bridging the two themes and analyzing what benefits the tasks can additionally derive from one another.

There have been a few basic attempts to utilize optical flow to enforce temporal consistency of semantic segmentation in a video sequence [5, 10, 22]. Also, segmenting the scene into superpixels (without clear semantics) has been shown to help estimate more accurate optical flow, assuming that object boundaries may give rise to motion boundaries [30, 36, 37]. Strictly speaking, however, previous work so far simply uses the results from one task as supplementary information for the other, and there have not been many attempts to relate the two tasks more closely or to solve them jointly. Yet, off-the-shelf motion estimation algorithms are not accurate enough to be fully relied upon [5, 22]. The only exception is very recent work that uses both semantic information and segmentation to increase the accuracy of optical flow [27], however without considering the benefits of temporal correspondence for semantic labeling.

Fig. 1. Overview of our approach. The red region contributes to estimating optical flow, and the blue region ensures temporal consistency of the semantic segmentation, both given two frames. The overlapping region defines the output of our method. (Color figure online)

In this paper, we address this gap and present an approach for joint optical flow estimation and temporally consistent semantic segmentation from monocular video, in which both tasks leverage each other. Figure 1 shows an overview of our method. We begin by assuming that a bottom-up semantic segmentation for each frame is given. Then we estimate accurate optical flow fields by exploiting the semantic information from the given semantic segmentation. The benefit of semantic labels is that they can give us information on the likely physical motion of the associated pixels. At the same time, accurate pixel-level correspondence between consecutive frames can establish temporally consistent semantic segmentations and help refine the initial results.

We make two major contributions. First, we introduce an accurate piecewise parametric optical flow formulation, which itself already outperforms the state of the art, particularly on dynamic objects. Our formulation explicitly handles occlusions to prevent the data term from unduly influencing the results in occlusion areas. As a result, our method additionally provides occlusion information such as occlusion masks and occlusion types. Our second contribution is to jointly estimate optical flow and temporally consistent semantic segmentation in a monocular video setting. For the flow estimation, we additionally apply the epipolar constraint for pixels that should be consistent with the camera ego-motion, as inferred by the semantic information. At the same time, accurately estimated flow helps to enforce temporal consistency on the semantic segmentation. We effectively realize these ideas in our joint formulation and make them feasible using inference based on patch-match belief propagation (PMBP) [2].

Our experiments on the popular KITTI dataset show that our method yields state-of-the-art results for optical flow. For estimating flows on dynamic foreground objects, which are particularly crucial from an autonomous navigation standpoint, our method outperforms all published optical flow algorithms in the benchmark by a significant margin.

2 Related Work

Piecewise Parametric Flow Estimation. Piecewise parametric approaches using a homography model have recently shown promising results on standard benchmarks [4, 9] for motion estimation. Representing the scene as a set of planar surfaces significantly reduces the number of unknowns; at the same time, parametrizing the motion of surfaces by 9-DoF or 8-DoF transforms ensures sufficient diversity and generality of their motion [12, 20, 31–33, 38]. In the stereo setting, Vogel et al. [31–33] proposed a scene representation consisting of piecewise 3D planes undergoing 3D rigid motion and demonstrated the most accurate results to date for estimating 3D scene flow on the KITTI benchmark.

On the other hand, the monocular case with its limited amount of data (i.e., two consecutive images) makes the problem more challenging, hence the type of regularization becomes much more important [12, 38]. Hornacek et al. [12] introduced a 9-DoF plane-induced model for optical flow via continuous optimization. Their method shows its strength on rigid motions, but is weaker on poorly textured regions because of the lack of global support. Yang and Li [38] instead use an 8-DoF homography motion in 2D space with adaptive size and shape of the pieces via discrete optimization.

Our approach also relies on an 8-DoF parameterization, which we found to yield accurate optical flow estimates in practice.

Epipolar Constraint-Based Flow Estimation. Several approaches have relied on the epipolar constraint for estimating motion [1]. Strictly enforcing the constraint has the benefit of significantly reducing the search space, but causes an inherent limitation in handling independently moving objects, whose motion usually violates the constraint [13, 23, 36, 37]. Adding the constraint as a soft prior can resolve this issue, but there remains the challenge of determining where to relinquish the constraint when relying only on the data term [34, 35].

Our approach explicitly resolves this ambiguity with the aid of semantic information, which provides information on the physical properties of objects (e.g., static or movable).

Temporally Consistent Semantic Segmentation. Among the broad literature on enforcing temporal consistency in video segmentation, we specifically consider the case of semantic segmentation here. One common way to inject temporal consistency is to utilize motion and structure features from 3D point clouds obtained by Structure from Motion (SfM) [3, 7, 29]. Another way is to jointly reconstruct the scene in 3D with semantic labels through a batch process, which naturally enables temporally consistent segmentation [14, 26, 40]. Causal approaches instead rely on temporal correspondence, obtained either from sparse feature tracking [25] or from dense flow maps combined with a similarity function in feature space [22]. Recent work [15] introduces feature-space optimization for spatio-temporal regularization in partitioned batches with overlaps.

We achieve temporal consistency for semantic segmentation using a jointly estimated, accurate dense flow map, which leverages the semantic information.

Optical Flow with Semantics. The question of exploiting semantics for optical flow has only received very limited attention so far. The most related approach is the very recent work by Sevilla-Lara et al. [27], which treats the problem sequentially: first, the scene is segmented into three semantic categories (things, planes, and stuff); second, motion is estimated individually for these semantic parts and later composited. In contrast, we treat the entire problem as the minimization of a single unified energy. Moreover, motion estimation and semantic segmentation are inferred jointly instead of sequentially, hence they may mutually leverage each other. Experimentally, we report significantly more accurate motion estimates for dynamic objects and demonstrate improved segmentation performance.

3 Approach

The core idea put forward in this paper is that optical flow and semantic segmentation are mutually beneficial and are best estimated jointly so that they can improve each other. Figure 1 shows the flow of our proposed method in the temporal domain and explains which elements contribute to which task. Here, we assume that some initial bottom-up semantic evidence is already given by an off-the-shelf algorithm, such as a CNN (e.g., [16]), which is subsequently refined by enforcing temporal consistency. In the red-shaded region, a pair of consecutive images and their refined semantic segmentation contribute to estimating optical flow more accurately. At the same time, the temporally consistent semantic labeling at time \(t+1\) is inferred from its bottom-up evidence, the previously estimated semantic labeling at time t, and the estimated flow map. For longer sequences, our approach proceeds in an online manner on two frames at a time.

Similar to [38], our formulation is based on an 8-DoF piecewise-parametric model with a superpixelization of the scene. Superpixels play an important role in our formulation for connecting the two different domains, optical flow and semantic segmentation. Each superpixel represents a single motion as well as a semantic label for the pixels inside it, and the motion is constrained by the physical properties that the semantic label implies. For example, the motion of pixels corresponding to physically static objects (e.g., building or road) can only be caused by camera motion. Thus, enforcing the epipolar constraint on those pixels can effectively regularize their motion.

Another important feature of our formulation is that we explicitly model the occlusion relationship between superpixels [36, 37] and infer an occlusion mask as well. This directly affects the data term by preventing occluded pixels from dominating it during the optimization.

3.1 Preprocessing

Superpixels. As superpixels generally tend to separate objects in images, they can be a good medium for carrying semantic labels and representative motions for their pixels. Our approach uses the recent state-of-the-art work of Yao et al. [39], which has been shown to be well suited for estimating optical flow.

Semantic Segmentation. For the bottom-up semantic evidence, we use an off-the-shelf fully convolutional network (FCN) [16] trained on the Cityscapes dataset [6], which contains typical objects frequent in street scenes.

Fundamental Matrix Estimation. In order to apply the epipolar constraint on superpixels whose semantic label indicates a definitely static object (e.g., roads or buildings), our approach requires the fundamental matrix induced by the camera motion. We use a standard approach, i.e., matching SIFT keypoints [18] and applying the 8-point algorithm [11] with RANSAC [17].
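A minimal sketch of this preprocessing step is given below, using OpenCV (an assumption on our part; the original implementation and all thresholds may differ):

```python
import cv2
import numpy as np

def estimate_fundamental_matrix(img_t, img_t1, ratio=0.8):
    """Match SIFT keypoints between two frames and fit F with the
    8-point algorithm inside a RANSAC loop (thresholds illustrative)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_t, None)
    kp2, des2 = sift.detectAndCompute(img_t1, None)

    # Lowe's ratio test on 2-nearest-neighbor matches
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC over the 8-point algorithm
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0,
                                            confidence=0.999)
    return F, inlier_mask
```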

3.2 Model

Our model jointly estimates (i) the optical flow between reference frame \(I^t\) and the next frame \(I^{t+1}\), and (ii) a temporally consistent semantic segmentation \(\mathbf {l}^{t+1}\) given bottom-up semantic evidence \(\mathbf {\hat{l}}^{t+1}\) and the previously estimated semantic labeling \(\mathbf {l}^{t}\). \(\mathbf {l}\) is a semantic label probability map, which has the same size as the input image and L channels, where L is the number of semantic classes. Instead of using a single label, we adopt label probabilities so that we can more naturally and continuously infer the semantic labels in the time domain. Note that we assume an online setting (i.e., no access to future information) and hence infer the segmentation at time \(t+1\) rather than t. Optical flow is represented by a set of piecewise motions of superpixels in the reference frame. We define the motion of a superpixel through a homography and formulate the objective for estimating the 8-DoF homography \(\mathbf {H}_s\) of each superpixel s and the temporally consistent semantic segmentation \(\mathbf {l}^{t+1}\) as:

$$\begin{aligned} E(\mathbf {H}, \mathbf {l}^{t+1}, o, b) = E_\text {D}(\mathbf {H}, o) + E_\text {L}(\mathbf {H}, \mathbf {l}^{t+1}, o) + E_\text {P}(\mathbf {H}) + E_\text {C}(\mathbf {H}, o, b) + E_\text {B}(b). \end{aligned}$$
(1)

Here, \(E_\text {D}\), \(E_\text {L}\), \(E_\text {P}\), \(E_\text {C}\), and \(E_\text {B}\) denote color data term, label data term, physical constraint term, connectivity term, and boundary occlusion prior, respectively.

We adopt two kinds of occlusion variables: the boundary occlusion label b between two superpixels, and the occlusion mask o defined at the pixel level. The boundary occlusion label b regularizes the spatial relationship between two neighboring superpixels (i.e., co-planar, hinge, left occlusion, or right occlusion) [36, 37]. The occlusion mask o explicitly models whether a pixel is occluded or not. One important difference to previous superpixel-based work [37] is that we additionally infer a pixelwise occlusion mask, which prevents occluded pixels from adversely affecting the data cost.

Data Terms. The data terms aggregate photometric differences

$$\begin{aligned} E_\text {D}(\mathbf {H}, o)&= \sum _{s \in S} \frac{1}{|s |} \underbrace{ \sum _{\mathbf {p} \in s} (1 - o_\mathbf {p} ) \rho _\text {D} \Big ( I^t(\mathbf {p}), I^{t+1}(\mathbf {p'}) \Big ) + o_\mathbf {p} \lambda _\text {o} }_{\text {image data}} \end{aligned}$$
(2)

and semantic label differences

$$\begin{aligned} E_\text {L}(\mathbf {H}, \mathbf {l}^{t+1}, o)&= \sum _{s \in S} \frac{1}{|s |} \sum _{\mathbf {p} \in s} \phi _l(\mathbf {H}, \mathbf {l}^{t+1}_\mathbf {p'}, o)\qquad \text {with}\end{aligned}$$
(3)
$$\begin{aligned} \phi _l(\mathbf {H}, \mathbf {l}^{t+1}_\mathbf {p'}, o)&= \frac{1}{2} \sum _{i}^{L} (1 - o_\mathbf {p}) \Big \Vert \mathbf {l}^{t+1}_{\mathbf {p'},i} - ( \alpha \mathbf {\hat{l}}^{t+1}_{\mathbf {p'},i} + (1-\alpha )\mathbf {l}^{t}_{\mathbf {p},i}) \Big \Vert ^2 \end{aligned}$$
(4)

over each pixel of each superpixel. Here, \(\mathbf {p'}\) is the corresponding pixel in \(I^{t+1}\) of pixel \(\mathbf {p}\) in \(I^{t}\), which is determined according to the homography \(\mathbf {H}_\mathcal {S(\mathbf {p})} \in \mathbb {R}^{3 \times 3}\) of its superpixel

$$\begin{aligned} \mathbf {p'} = \mathbf {H}_\mathcal {S(\mathbf {p})} \mathbf {p}, \end{aligned}$$
(5)

where \(\mathcal {S}: I^{t} \rightarrow S\) is a mapping that assigns a pixel \(\mathbf {p}\) to its superpixel \(s \in S\).
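Equation (5) is understood in homogeneous coordinates; the following small sketch shows the resulting per-superpixel warp, vectorized over pixels (names are illustrative):

```python
import numpy as np

def warp_pixels(H_s, pixels):
    """Map pixels p (N x 2 array, frame t) to p' in frame t+1 via the
    superpixel's homography H_s, using homogeneous coordinates."""
    p_h = np.hstack([pixels, np.ones((pixels.shape[0], 1))])  # (N, 3)
    p2_h = p_h @ H_s.T                                        # apply H_s
    return p2_h[:, :2] / p2_h[:, 2:3]                         # de-homogenize
```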

In the image data term in Eq. (2), the function \(\rho _D(\cdot ,\cdot )\) measures the photometric differences between two pixels using the ternary transform [28] and a truncated linear penalty. If a pixel \(\mathbf {p}\) is occluded (i.e., \(o_{\mathbf {p}} = 1\)), a constant penalty \(\lambda _\text {o}\) is applied.
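The sketch below illustrates, under our reading of Eq. (2), how such a data cost could be computed; the ternary transform shown is a generic census-style variant, and the window size, threshold, and penalties are illustrative rather than the values used in the paper:

```python
import numpy as np

def ternary_transform(img, eps=2, win=3):
    """Census-style ternary signature per pixel: -1/0/+1 for each neighbor
    in a win x win window, depending on its intensity relative to the
    center pixel (threshold eps). Returns an (H, W, win*win-1) array."""
    r = win // 2
    pad = np.pad(img.astype(np.int32), r, mode='edge')
    sigs = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[r + dy: r + dy + img.shape[0],
                          r + dx: r + dx + img.shape[1]]
            diff = shifted - img.astype(np.int32)
            sigs.append(np.sign(diff) * (np.abs(diff) > eps))
    return np.stack(sigs, axis=-1)

def image_data_cost(sig_t, sig_t1_at_p_prime, occ_mask, lam_o=1.0, trunc=4.0):
    """Per-pixel cost of Eq. (2): truncated Hamming distance between the
    ternary signatures at p and at the warped position p' for visible
    pixels, and a constant penalty lam_o for occluded ones."""
    hamming = np.sum(sig_t != sig_t1_at_p_prime, axis=-1).astype(np.float32)
    rho = np.minimum(hamming, trunc)                 # truncated linear penalty
    return (1.0 - occ_mask) * rho + occ_mask * lam_o
```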

The label data term in Eq. (3) measures the distance between two semantic label probability distributions over each pixel: (i) our estimation \(\mathbf {l}^{t+1}_{\mathbf {p'}}\) and (ii) a weighted sum of the previous estimation \(\mathbf {l}^{t}_{\mathbf {p}}\), which is propagated by the optical flow, and the bottom-up evidence \(\mathbf {\hat{l}}^{t+1}_{\mathbf {p'}}\), while considering its occlusion status. The motivation of the term is to penalize label differences to the bottom-up evidence and at the same time propagate label evidence over time, except when an occlusion takes place.
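A compact sketch of this label data cost follows; the arrays are \(H \times W \times L\) probability maps, and the blending weight alpha is an illustrative placeholder:

```python
import numpy as np

def label_data_cost(l_next, l_hat_next, l_prev_warped, occ_mask, alpha=0.5):
    """Per-pixel cost of Eqs. (3)-(4): squared distance between the estimated
    distribution l^{t+1} at p' and a blend of the bottom-up evidence at p'
    with the flow-propagated previous labels at p; occluded pixels are skipped."""
    target = alpha * l_hat_next + (1.0 - alpha) * l_prev_warped
    sq = 0.5 * np.sum((l_next - target) ** 2, axis=-1)
    return (1.0 - occ_mask) * sq
```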

Physical Constraint Term. Semantic labels can provide useful cues for estimating optical flow. If pixels are labeled as physically static objects, such as building, road, or infrastructure, then they normally do not undergo any 3D motion, hence their observed 2D motion is caused by camera motion and should thus satisfy the epipolar constraint. We define the corresponding term as

$$\begin{aligned} E_\text {P}(\mathbf {H})= & {} \sum _{s \in S} \min \big ( \phi _\text {P}(s, \mathbf {H}_s), \lambda _{\mathrm {non\_st}} + \beta \, [l^t_s \in L_\text {st}] \big ),\end{aligned}$$
(6)
$$\begin{aligned} \text {where}\quad \phi _\text {P}(s, \mathbf {H}_s)= & {} \frac{1}{|s |} \sum _{\mathbf {p} \in s} {\big \Vert \mathbf {p'}^\top \mathbf {F} \mathbf {p} \big \Vert _1 } = \frac{1}{|s |} \sum _{\mathbf {p} \in s} {\big \Vert (\mathbf {H}_\mathcal {S(\mathbf {p})} \mathbf {p})^\top \mathbf {F} \mathbf {p} \big \Vert _1 } \end{aligned}$$
(7)

measures how well the homography matrix \(\mathbf {H}_s\) of a superpixel s meets the epipolar constraint given by the fundamental matrix \(\mathbf {F}\). For non-static objects, such as pedestrians or vehicles, we still apply the epipolar penalty, however only a weak one, using a low truncation threshold \(\lambda _{\mathrm {non\_st}}\). This is motivated by the fact that possibly dynamic objects may in fact stand still and thus obey epipolar geometry, but we do not want to penalize them too strongly if they do not. For static objects, on the other hand, we augment the truncation threshold by \(\beta \) in order to impose a stricter penalty. \(L_\text {st}\) is the set of semantic labels that correspond to physically static objects. \(l^t_s\) is the representative semantic label of superpixel s, i.e., the label with the highest probability over the pixels in the superpixel: \(l^t_s = {{\mathrm{argmax}}}_i \sum _{\mathbf {p} \in s} \mathbf {l}^{t}_{\mathbf {p},i}\).
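The following sketch shows one way to evaluate this term for a single superpixel, given its pixels, homography, and representative label; the truncation values are illustrative placeholders:

```python
import numpy as np

def epipolar_cost(pixels, H_s, F, label_s, static_labels,
                  lam_non_st=0.5, beta=2.0):
    """Superpixel term of Eqs. (6)-(7): mean |p'^T F p| under the superpixel's
    homography, truncated at a low threshold for movable classes and at a
    higher one (augmented by beta) for physically static classes."""
    p = np.hstack([pixels, np.ones((pixels.shape[0], 1))])   # (N, 3) homogeneous
    p2 = p @ H_s.T
    p2 = p2 / p2[:, 2:3]                                     # p' = H_s p
    residual = np.abs(np.einsum('ni,ij,nj->n', p2, F, p))    # |p'^T F p|
    trunc = lam_non_st + (beta if label_s in static_labels else 0.0)
    return min(residual.mean(), trunc)
```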

Fig. 2. (a) Four cases of boundary relations between two superpixels: co-planar, hinge, left occlusion, and right occlusion. (b) Visualization of the set of occluded pixels \(\varOmega _{s_\text {i} \rightarrow s_\text {j}}\) in the case of a left occlusion (black region).

Connectivity Term. The connectivity term encourages the smoothness of motion between two neighboring superpixels based on their occlusion relationship:

$$\begin{aligned} E_\text {C}(\mathbf {H}, o, b)&= \sum _{s_i \sim s_j} \phi _\text {C} (\mathbf {H}_{s_i}, \mathbf {H}_{s_j}, o, b_{ij})\end{aligned}$$
(8)
$$\begin{aligned} \text {with}\quad \phi _\text {C} (\mathbf {H}_{s_i}, \mathbf {H}_{s_j}, o, b_{ij})&= \left\{ \begin{array}{llll} \phi _\text {co}(\mathbf {H}_{s_i}, \mathbf {H}_{s_j}, o) &{}&{}&{} \text {if } b_{ij} = \text {co-planar}, \\ \phi _\text {h}(\mathbf {H}_{s_i}, \mathbf {H}_{s_j}, o) &{}&{}&{} \text {if } b_{ij} = \text {hinge}, \\ \phi _\text {occ}(s_i, s_j, o) &{}&{}&{} \text {if } b_{ij} = \text {left occlusion}, \\ \phi _\text {occ}(s_j, s_i, o) &{}&{}&{} \text {if } b_{ij} = \text {right occlusion}. \\ \end{array}\right. \end{aligned}$$
(9)

As shown in Fig. 2(a), the boundary occlusion flag \(b_{ij}\) expresses the relationship between two neighboring superpixels \(s_i\) and \(s_j\) as co-planar, hinge, left-occlusion, or right-occlusion [36, 37]. This categorization helps to regularize the motion of two superpixels defined by their homography matrices. We distinguish between three different potentials:

$$\begin{aligned} \phi _\text {co}(\mathbf {H}_{s_i}, \mathbf {H}_{s_j}, o)= & {} \frac{1}{|s_i \cup s_j |} \sum _{\mathbf {p} \in s_i \cup s_j} \big \Vert \mathbf {H}_{s_{i}} \mathbf {p} - \mathbf {H}_{s_{j}} \mathbf {p} \big \Vert _1 + \sum _{\mathbf {p} \in s_i \cup s_j} \lambda _\text {imp}[o_\mathbf {p} = 1] \end{aligned}$$
(10)
$$\begin{aligned} \phi _\text {h}(\mathbf {H}_{s_i}, \mathbf {H}_{s_j}, o)= & {} \frac{1}{ |\mathcal {B}_{s_{i},s_{j}} |} \sum _{\mathbf {p} \in \mathcal {B}_{s_{i},s_{j}}} \big \Vert \mathbf {H}_{s_{i}} \mathbf {p} - \mathbf {H}_{s_{j}} \mathbf {p} \big \Vert _1 + \sum _{\mathbf {p} \in s_i \cup s_j} \lambda _\text {imp}[o_\mathbf {p} = 1] \end{aligned}$$
(11)
$$\begin{aligned} \phi _\text {occ}(s_\text {f}, s_\text {b}, o)= & {} \sum _{\mathbf {p} \in s_\text {f}} \lambda _\text {imp}[o_\mathbf {p} = 1] \nonumber \\&+ \sum _{\mathbf {p} \in s_\text {b}} \Big ( \lambda _\text {imp}[ \mathbf {p} \in \varOmega _{s_\text {f} \rightarrow s_\text {b}} ][o_\mathbf {p} = 0] + \lambda _\text {imp}[ \mathbf {p} \notin \varOmega _{s_\text {f} \rightarrow s_\text {b}} ][o_\mathbf {p} = 1] \Big ) \end{aligned}$$
(12)

These are motivated as follows: When two superpixels are co-planar, all pixels within them should follow an identical homography as they lie on the same plane. For a hinge relationship, only the pixels on the boundary set \(\mathcal {B}_{s_{i},s_{j}}\) can satisfy the motion of both superpixels \(s_i\) and \(s_j\). In both cases, there should be no occluded pixels, hence we adopt a very large ‘impossible’ penalty \(\lambda _\text {imp}\) to prevent occluded pixels from occurring. In the case that one superpixel occludes the other, their motions only affect the occlusion masks. Equation (12) expresses the case that pixels of the front superpixel \(s_\text {f}\) occlude some pixels of the back superpixel \(s_\text {b}\). As shown in Fig. 2(b), \(\varOmega _{s_\text {f} \rightarrow s_\text {b}}\) is the set of pixels in \(s_\text {b}\) that are occluded by pixels in \(s_\text {f}\) under the estimated motion. No pixel in the front superpixel \(s_\text {f}\) should be occluded, and only the pixels in \(\varOmega _{s_\text {f} \rightarrow s_\text {b}}\) within \(s_\text {b}\) should be occluded.
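As a minimal sketch of the first two potentials (the occlusion case in Eq. (12) only toggles the impossible penalty on the occlusion mask), with lam_imp standing in for the large ‘impossible’ penalty and all values illustrative:

```python
import numpy as np

def warp(H, p):
    """Apply a homography to (N, 2) pixel positions in homogeneous coordinates."""
    q = np.hstack([p, np.ones((p.shape[0], 1))]) @ H.T
    return q[:, :2] / q[:, 2:3]

def phi_coplanar(H_i, H_j, pixels_union, occ_union, lam_imp=1e4):
    """Eq. (10): both homographies must agree on every pixel of the two
    superpixels, and no pixel of either superpixel may be occluded."""
    diff = np.abs(warp(H_i, pixels_union) - warp(H_j, pixels_union)).sum(axis=1)
    return diff.mean() + lam_imp * occ_union.sum()

def phi_hinge(H_i, H_j, boundary_pixels, occ_union, lam_imp=1e4):
    """Eq. (11): the two homographies only need to agree on the shared
    boundary, but occluded pixels are still forbidden."""
    diff = np.abs(warp(H_i, boundary_pixels) - warp(H_j, boundary_pixels)).sum(axis=1)
    return diff.mean() + lam_imp * occ_union.sum()
```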

Boundary Occlusion Prior. Without an additional prior term, the boundary occlusion flag in the connectivity term would always prefer the occlusion cases. We thus define a prior term that imposes a proper bias for each case:

$$\begin{aligned} E_\text {B}(b) = \left\{ \begin{array}{llll} \lambda _\text {co} [l_{s_i} \ne l_{s_j}] &{}&{}&{} \text {if } b_{ij} = \text {co-planar}, \\ \lambda _\text {h} &{}&{}&{} \text {if } b_{ij} = \text {hinge}, \\ \lambda _\text {occ} &{}&{}&{} \text {if } b_{ij} = \text {occlusion}, \end{array}\right. \end{aligned}$$
(13)

where \(\lambda _\text {occ}> \lambda _\text {h}> \lambda _\text {co} > 0\). Because it is less likely that two different objects are co-planar in the real world, we only apply the prior penalty for the co-planar case \(\lambda _\text {co}\) when the respective semantic labels of the superpixels differ.
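In code form this prior is just a case distinction; the lambda values below are illustrative and only need to satisfy the stated ordering:

```python
def boundary_prior(b_ij, label_i, label_j,
                   lam_co=0.1, lam_h=0.3, lam_occ=0.6):
    """Eq. (13): bias for each boundary relation. The co-planar penalty only
    applies when the two superpixels carry different semantic labels."""
    if b_ij == 'co-planar':
        return lam_co if label_i != label_j else 0.0
    if b_ij == 'hinge':
        return lam_h
    return lam_occ   # left or right occlusion
```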

3.3 Optimization

The minimization of our objective is challenging, as it combines discrete (i.e., \(\{\mathbf {l}^{t+1}, b, o\}\)) and continuous (i.e., \(\mathbf {H}\)) variables. We use a block coordinate descent algorithm. As shown in Algorithm 1, we iteratively update the variables in the following order: (i) homography matrices \(\mathbf {H}\) of the superpixels, (ii) occlusion variables b, o, and (iii) semantic label probability maps \(\mathbf {l}^{t+1}\). Optimizing the homography matrices \(\mathbf {H}\) is especially challenging because the matrices have 8 DoF in 2D space and their parameterization incurs a high-dimensional search space. We address this using PatchMatch Belief Propagation (PMBP) [2]; see below for details.

Once the motion \(\mathbf {H}\) is updated, the occlusion variables can be easily updated independently for each pair of neighboring superpixels, while the other variables are held fixed. Given their motions, we first calculate the overlapping region, which can potentially be the occluded region for one of the two superpixels. Then, we calculate the energy in Eq. (1) for all four boundary occlusion cases with the candidate occlusion pixels given. The boundary occlusion case with the minimum energy is taken, including the corresponding occlusion mask state. Finally, the semantic label probability map \(\mathbf {l}^{t+1}\) can also be easily updated independently for all superpixels by minimizing the label data term in Eq. (3).

Algorithm 1. Block coordinate descent for jointly estimating the homographies \(\mathbf {H}\), the occlusion variables (b, o), and the semantic labeling \(\mathbf {l}^{t+1}\).
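The overall loop can be summarized as follows; the helper functions are hypothetical placeholders for the three update steps described in this section:

```python
def block_coordinate_descent(H, b, o, l_next, n_outer=5):
    """Sketch of Algorithm 1: alternate between the three variable blocks
    (five outer iterations in our experiments)."""
    for _ in range(n_outer):
        # (i) continuous update of per-superpixel homographies via PMBP
        H = update_homographies_pmbp(H, b, o, l_next)
        # (ii) for each pair of neighboring superpixels, evaluate Eq. (1) for
        #      the four boundary cases and keep the minimum, together with
        #      the corresponding occlusion mask state
        b, o = update_occlusion_variables(H, b, o, l_next)
        # (iii) per-superpixel update of the label probability maps by
        #       minimizing the label data term in Eq. (3)
        l_next = update_label_probabilities(H, o, l_next)
    return H, b, o, l_next
```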

Optimizing Homography Matrices Using PMBP. Our method optimizes the homography matrices in the continuous domain using PatchMatch Belief Propagation (PMBP) [2]. PMBP is a simple but powerful optimizer based on belief propagation. Instead of using a discrete label set, PMBP uses a set of particles that is randomly sampled and propagated in the continuous domain. PMBP requires an effective way of proposing the random particles; typically they are obtained from a normal distribution defined over some parameters. In our approach, however, we devise several strategies for proposing particles of the homography matrices without over-parameterization. Between two image patches (a superpixel and its corresponding region in the other frame), we estimate the homography matrix using (i) LK warping, (ii) 3 correspondences and the fundamental matrix, (iii) 4 randomly perturbed correspondences, and (iv) sampled correspondences from neighboring superpixels. Empirically, we find that these strategies generate reasonable particles without requiring an over-parameterization, and only five outer iterations are needed for convergence.
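As an example of one proposal strategy (our reading of strategy (iii); the noise level and the use of OpenCV are assumptions, not a description of the original implementation):

```python
import cv2
import numpy as np

def propose_from_perturbed_correspondences(pts_t, pts_t1, sigma=1.0, rng=None):
    """Propose a homography particle from 4 randomly chosen correspondences
    (pts_t, pts_t1: (N, 2) arrays of matched positions) whose target
    positions are perturbed with Gaussian noise of std sigma."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pts_t), size=4, replace=False)
    src = np.float32(pts_t[idx])
    dst = np.float32(pts_t1[idx] + rng.normal(0.0, sigma, size=(4, 2)))
    return cv2.getPerspectiveTransform(src, dst)   # exact 8-DoF fit to 4 points
```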

4 Experiments

We verify the effectiveness of our approach with a series of experiments on the well-established KITTI benchmark [9]. To the best of our knowledge, there is no dataset that simultaneously provides ground truth for optical flow and semantic segmentation in the same scenes; while ground truth for both is available in the KITTI benchmark, the evaluation is carried out on disjoint sequences.

We first evaluate our optical flow results on the KITTI Optical Flow 2015 benchmark and compare to the top-performing algorithms in the benchmark. In addition, we analyze the effectiveness of the semantics-related terms to understand how effectively the semantic information contributes to the estimation of optical flow. Finally, we demonstrate qualitative and quantitative results for temporally consistent semantic segmentation. We use DiscreteFlow [21] to initialize the flow estimation and utilize the FCN model [16] trained on the Cityscapes dataset [6] for bottom-up semantic segmentation evidence. We set our parameters automatically using Bayesian optimization [19] on the training portion.
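As a rough illustration of the parameter search, a stand-in sketch using scikit-optimize is shown below; the objective run_training_error and the parameter list are hypothetical, and [19] denotes the Bayesian optimization framework actually used:

```python
from skopt import gp_minimize

def flow_error(params):
    """Hypothetical objective: run our method on the KITTI training split
    with the given weights and return the mean Fl-all error."""
    lam_o, lam_co, lam_h, lam_occ, alpha = params
    return run_training_error(lam_o=lam_o, lam_co=lam_co, lam_h=lam_h,
                              lam_occ=lam_occ, alpha=alpha)

result = gp_minimize(flow_error,
                     dimensions=[(0.01, 5.0)] * 4 + [(0.0, 1.0)],
                     n_calls=50, random_state=0)
best_params = result.x
```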

Table 1. KITTI optical flow 2015: comparison to the published top-performing optical flow methods in the benchmark. Our method leads to state-of-the-art results and significantly increases the performance on challenging dynamic regions (fg).
Fig. 3. Results on KITTI Optical Flow 2015. Left: source images overlaid with semantic segmentation results. Middle: our flow estimation results. Right: qualitative comparison with DiscreteFlow: gray pixels – both methods correct, sky-blue pixels – our method is correct but DiscreteFlow is not, red pixels – DiscreteFlow is correct but ours is not, and yellow pixels – both failed. (Color figure online)

4.1 KITTI 2015 Optical Flow

We compare to the top-scoring optical flow methods on the KITTI Optical Flow 2015 benchmark that had been published at the time of submission. Note that we do not consider scene flow methods here, as they have access to multiple views. Table 1 shows the results. Fl-bg, Fl-fg, and Fl-all denote the flow error evaluated over background pixels only, foreground pixels only, and all pixels, respectively. Our method outperforms all top-scoring methods when considering all non-occluded pixels and performs very close to the leading method when considering all pixels. Especially for the flow of dynamic foreground objects, our method outperforms all published results by a large margin. This is of particular importance in the domain of autonomous navigation, where understanding the motion of other traffic participants is crucial. This substantial performance gain stems from several design decisions. First, our piecewise motion representation effectively abstracts the planar surfaces of foreground vehicles, and the 8-DoF homography successfully describes the rigid motion of each surface.

The soft epipolar constraint of our model, derived from the jointly estimated semantics, contributes to the flow estimation particularly on background pixels, and clear performance gains are observed for non-occluded pixels. When including occluded pixels, however, SOF [27] slightly outperforms our method. The main reason is that their localized layer approach and planar approximation with large pieces can regularize the occluded regions better than our piecewise model based on superpixels. In future work, this gap may be addressed through an additional global support model or coarse-to-fine estimation. MotionSLIC [36] still performs better than our method on background pixels by strictly enforcing the epipolar constraint. As a trade-off, however, the strict epipolar constraint yields significant flow errors for foreground pixels and eventually increases the overall error.

Figure 3 shows visual results on the KITTI dataset (visualized as in [27]) and provides a direct comparison to DiscreteFlow, which highlights where the performance gain over the initialization originates. Our method provides more accurate flow estimates not only on foreground objects, but also on static objects.

4.2 Effectiveness of Semantic-Related Terms

Next we analyze the effectiveness of the semantic-related terms, the epipolar constraint term and the label data term, in order to understand how much the semantic information contributes to optical flow estimation over our basic piecewise optical flow model. We turned off each term and evaluated how each setting affects the flow estimation results on the KITTI Flow 2015 training dataset. The analysis is shown in Table 2.

We find that the label term clearly contributes to more accurate flow estimation overall, but it has a side-effect on background areas where the initial semantic segmentation may contain outliers. Using the epipolar constraint term results in more accurate flow estimates on background areas, which largely satisfy the epipolar assumption. On foreground objects, however, the flow error slightly increases. This loss reflects the trade-off in our assumption that non-static objects (e.g., vehicles) may in fact be standing still, which leads us to apply the epipolar cost to them as well, albeit with a small truncation threshold.

One interesting observation is that our basic piecewise flow model, without the semantic-related terms, still demonstrates competitive performance for estimating optical flow on non-occluded pixels.

Table 2. Effectiveness of semantic-related terms: the performance of our basic piecewise optical flow model is boosted further (KITTI 2015 training set).

4.3 Temporally Consistent Semantic Segmentation

We finally evaluate the performance of our temporally consistent semantic segmentation on a sequence from the KITTI dataset for which third-party ground-truth semantic annotations are available [24]. This, however, is a preliminary result, since the semantic segmentation model we use here is trained on the higher-resolution Cityscapes dataset [6], which possesses somewhat different statistics. Better results can be expected from a custom-trained model. Table 3 shows that our joint approach increases the segmentation accuracy over the bottom-up segmentation results [16] by 2 percentage points in the intersection-over-union (IoU) metric. The accuracy increases on all object classes except for the pole class, which is not well captured by our superpixels. Figure 4(a) shows our results on three consecutive frames, and Fig. 4(b) demonstrates our performance gain/loss over the bottom-up segmentation using the visualization of Fig. 3. With the aid of accurate temporal correspondences, our method revises inconsistent results and effectively reduces false positives over time.

Table 3. Performance of temporally consistent semantic segmentation.
Fig. 4. Temporally consistent semantic segmentation results.

5 Conclusion

We have proposed a method for jointly estimating optical flow and temporally consistent semantic segmentation from monocular video. Our results on the challenging KITTI benchmark demonstrated that both tasks can successfully leverage each other. A piecewise optical flow model with PMBP inference forms the basis and itself already achieves competitive results. Embedding semantic information through label consistency and epipolar constraints boosts the performance further. For dynamic objects, which are particularly important from the viewpoint of autonomous navigation, our method outperforms all published results in the benchmark by a large margin. Preliminary results on temporally consistent semantic segmentation further demonstrate the benefit of our approach by reducing false positives and flickering. We believe that a refinement of the superpixels may lead to further performance gains in the future.