
1 Introduction

The architecture of deep convolutional neural networks (CNNs) has evolved for years, becoming more accurate and faster. Since the milestone work of AlexNet [15], the ImageNet classification accuracy has been significantly improved by novel structures, including VGG [25], GoogLeNet [28], ResNet [5, 6], DenseNet [11], ResNeXt [33], SE-Net [9], and automatic neural architecture search [18, 21, 39], to name a few.

Besides accuracy, computation complexity is another important consideration. Real-world tasks often aim at obtaining the best accuracy under a limited computational budget, given by the target platform (e.g., hardware) and application scenario (e.g., autonomous driving requires low latency). This motivates a series of works towards light-weight architecture design and better speed-accuracy tradeoff, including Xception [2], MobileNet [8], MobileNet V2 [24], ShuffleNet [35], and CondenseNet [10], to name a few. Group convolution and depth-wise convolution are crucial in these works.

To measure the computation complexity, a widely used metric is the number of floating-point operations, or FLOPs. However, FLOPs is an indirect metric. It is an approximation of, but usually not equivalent to, the direct metric that we really care about, such as speed or latency. Such discrepancy has been noticed in previous works [7, 19, 24, 30]. For example, MobileNet v2 [24] is much faster than NASNET-A [39] although they have comparable FLOPs. This phenomenon is further exemplified in Fig. 1(c) and (d), which show that networks with similar FLOPs have different speeds. Therefore, using FLOPs as the only metric for computation complexity is insufficient and could lead to sub-optimal design.

Fig. 1.
Measurement of accuracy (ImageNet classification on the validation set), speed and FLOPs of four network architectures on two hardware platforms with four different levels of computation complexity (see text for details). (a, c) GPU results, \(batch size=8\). (b, d) ARM results, \(batch size=1\). The best performing algorithm, our proposed ShuffleNet v2, is in the top-right region in all cases.

The discrepancy between the indirect (FLOPs) and direct (speed) metrics can be attributed to two main reasons. First, several important factors that have a considerable effect on speed are not taken into account by FLOPs. One such factor is memory access cost (MAC). Such cost constitutes a large portion of runtime in certain operations like group convolution. It could be the bottleneck on devices with strong computing power, e.g., GPUs. This cost should not be simply ignored during network architecture design. Another factor is the degree of parallelism. A model with a high degree of parallelism could be much faster than another one with a low degree of parallelism, under the same FLOPs.

Second, operations with the same FLOPs could have different running times, depending on the platform. For example, tensor decomposition is widely used in early works [14, 36, 37] to accelerate matrix multiplication. However, the recent work [7] finds that the decomposition in [36] is even slower on GPU although it reduces FLOPs by 75%. We investigated this issue and found that this is because the latest CUDNN [1] library is specially optimized for \(3\times 3\) conv. We thus cannot simply assume that \(3\times 3\) conv is 9 times slower than \(1\times 1\) conv.

With these observations, we propose that two principles should be considered for effective network architecture design. First, the direct metric (e.g., speed) should be used instead of the indirect ones (e.g., FLOPs). Second, such a metric should be evaluated on the target platform.

In this work, we follow the two principles and propose a more effective network architecture. In Sect. 2, we first analyze the runtime performance of two representative state-of-the-art networks [24, 35]. Then, we derive four guidelines for efficient network design, which go beyond only considering FLOPs. While these guidelines are platform independent, we perform a series of controlled experiments to validate them on two different platforms (GPU and ARM) with dedicated code optimization, ensuring that our conclusions are relevant for state-of-the-art practice.

In Sect. 3, according to the guidelines, we design a new network structure. As it is inspired by ShuffleNet [35], it is called ShuffleNet V2. It is demonstrated to be much faster and more accurate than the previous networks on both platforms, via comprehensive validation experiments in Sect. 4. Figure 1(a) and (b) gives an overview of the comparison. For example, given the computation complexity budget of 40M FLOPs, ShuffleNet v2 is 3.5% and 3.7% more accurate than ShuffleNet v1 and MobileNet v2, respectively.

Fig. 2.
Run time decomposition on two representative state-of-the-art network architectures, ShuffleNet v1 [35] (1\(\times \), \(g=3\)) and MobileNet v2 [24] (1\(\times \)).

2 Practical Guidelines for Efficient Network Design

Our study is performed on two widely adopted hardware platforms with industry-level optimization of the CNN library. We note that our CNN library is more efficient than most open source libraries. Thus, we ensure that our observations and conclusions are solid and of significance for practice in industry.

  • GPU. A single NVIDIA GeForce GTX 1080Ti is used. The convolution library is CUDNN 7.0 [1]. We also activate the benchmarking function of CUDNN to select the fastest algorithm for each convolution.

  • ARM. A Qualcomm Snapdragon 810. We use a highly-optimized Neon-based implementation. A single thread is used for evaluation.

Other settings include: full optimization options (e.g. tensor fusion, which is used to reduce the overhead of small operations) are switched on. The input image size is \(224\times 224\). Each network is randomly initialized and evaluated 100 times. The average runtime is used.
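
For concreteness, the measurement protocol can be sketched as follows. This is a minimal illustration assuming a PyTorch model on GPU; the experiments in this paper use a dedicated, industry-level CNN library, so the harness below is only a stand-in.

```python
# Minimal sketch of the timing protocol described above: a randomly
# initialized network, repeated forward passes, and the average runtime.
# Assumes PyTorch on GPU (illustrative only; not the library used here).
import time
import torch

def average_runtime_ms(model, batch_size=8, image_size=224, runs=100):
    """Average forward-pass time in milliseconds over `runs` iterations."""
    model = model.cuda().eval()
    x = torch.randn(batch_size, 3, image_size, image_size, device="cuda")
    with torch.no_grad():
        for _ in range(10):                 # warm-up to exclude one-off costs
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(x)
        torch.cuda.synchronize()
    return (time.time() - start) / runs * 1000.0
```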

To initiate our study, we analyze the runtime performance of two state-of-the-art networks, ShuffleNet v1 [35] and MobileNet v2 [24]. They are both highly efficient and accurate on the ImageNet classification task. They are both widely used on low-end devices such as mobile phones. Although we only analyze these two networks, we note that they are representative of the current trend. At their core are group convolution and depth-wise convolution, which are also crucial components for other state-of-the-art networks, such as ResNeXt [33], Xception [2], MobileNet [8], and CondenseNet [10].

The overall runtime is decomposed for different operations, as shown in Fig. 2. We note that the FLOPs metric only accounts for the convolution part. Although this part consumes most of the time, the other operations, including data I/O, data shuffle and element-wise operations (AddTensor, ReLU, etc.), also occupy a considerable amount of time. Therefore, FLOPs is not an accurate enough estimation of actual runtime.

Based on this observation, we perform a detailed analysis of runtime (or speed) from several different aspects and derive several practical guidelines for efficient network architecture design.

Table 1. Validation experiment for Guideline 1. Four different ratios of number of input/output channels (c1 and c2) are tested, while the total FLOPs under the four ratios is fixed by varying the number of channels. Input image size is \(56\times 56\).

(G1) Equal Channel Width Minimizes Memory Access Cost (MAC). Modern networks usually adopt depthwise separable convolutions [2, 8, 24, 35], where the pointwise convolution (i.e., \(1\times 1\) convolution) accounts for most of the complexity [35]. We study the kernel shape of the \(1\times 1\) convolution. The shape is specified by two parameters: the number of input channels \(c_1\) and output channels \(c_2\). Let h and w be the spatial size of the feature map; the FLOPs of the \(1\times 1\) convolution is \(B=hwc_1c_2\).

For simplicity, we assume the cache in the computing device is large enough to store the entire feature maps and parameters. Thus, the memory access cost (MAC), or the number of memory access operations, is \(\text {MAC} = hw(c_1 + c_2) + c_1c_2\). Note that the two terms correspond to the memory access for input/output feature maps and kernel weights, respectively.

From the inequality of arithmetic and geometric means, we have

$$\begin{aligned} \text {MAC} \ge 2\sqrt{hwB} + \frac{B}{hw}. \end{aligned}$$
(1)

Therefore, MAC has a lower bound given by FLOPs. It reaches the lower bound when the numbers of input and output channels are equal.
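
As a sanity check of Eq. (1), the following snippet fixes the FLOPs B and evaluates MAC for several input/output channel ratios; the 1:1 ratio attains the lower bound. The concrete channel numbers are illustrative choices, not those used in Table 1.

```python
# Numeric check of Eq. (1): at fixed FLOPs B = h*w*c1*c2, MAC is smallest
# when c1 == c2. Channel counts below are illustrative, not from Table 1.
import math

h = w = 56
B = 56 * 56 * 128 * 128                    # fixed FLOPs budget B = h*w*c1*c2

for ratio in (1, 2, 6, 12):                # test c1 : c2 = 1 : ratio
    c1 = math.sqrt(B / (h * w * ratio))
    c2 = ratio * c1
    mac = h * w * (c1 + c2) + c1 * c2
    print(f"c1:c2 = 1:{ratio:<2d}  MAC = {mac / 1e6:.2f}M")

lower_bound = 2 * math.sqrt(h * w * B) + B / (h * w)
print(f"lower bound      MAC = {lower_bound / 1e6:.2f}M")
```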

The conclusion is theoretical. In practice, the cache on many devices is not large enough. Also, modern computation libraries usually adopt complex blocking strategies to make full use of the cache mechanism [3]. Therefore, the real MAC may deviate from the theoretical one. To validate the above conclusion, an experiment is performed as follows. A benchmark network is built by stacking 10 building blocks repeatedly. Each block contains two convolution layers. The first contains \(c_1\) input channels and \(c_2\) output channels, and the second reverses them (\(c_2\) input and \(c_1\) output channels).

Table 1 reports the running speed obtained by varying the ratio \(c_1:c_2\) while fixing the total FLOPs. It is clear that as \(c_1:c_2\) approaches 1 : 1, the MAC becomes smaller and the network evaluation speed becomes faster.

Table 2. Validation experiment for Guideline 2. Four values of group number g are tested, while the total FLOPs under the four values is fixed by varying the total channel number c. Input image size is \(56\times 56\).

(G2) Excessive Group Convolution Increases MAC. Group convolution is at the core of modern network architectures [12, 26, 31, 33, 34, 35]. It reduces the computational complexity (FLOPs) by changing the dense convolution between all channels to a sparse one (only within groups of channels). On one hand, it allows usage of more channels given a fixed FLOPs budget and increases the network capacity (thus better accuracy). On the other hand, however, the increased number of channels results in more MAC.

Formally, following the notations in G1 and Eq. 1, the relation between MAC and FLOPs for \(1\times 1\) group convolution is

$$\begin{aligned} \begin{aligned} \text {MAC}&= hw(c_1 + c_2) + \frac{c_1 c_2}{g} \\&= hwc_1 + \frac{Bg}{c_1} + \frac{B}{hw}, \end{aligned} \end{aligned}$$
(2)

where g is the number of groups and \(B=hwc_1c_2/g\) is the FLOPs. It is easy to see that, given the fixed input shape \(c_1 \times h \times w\) and the computational cost B, MAC increases with the growth of g.
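
A quick numeric illustration of Eq. (2): fixing the input shape \(c_1 \times h \times w\) and the FLOPs B, MAC grows as the group number g increases. The channel numbers below are illustrative, not those of Table 2.

```python
# Numeric check of Eq. (2): with input shape c1 x h x w and the FLOPs
# B = h*w*c1*c2/g held fixed, MAC grows with the group number g.
# Channel counts are illustrative, not taken from Table 2.
h = w = 56
c1 = 128
B = 56 * 56 * 128 * 128                    # fixed computational cost

for g in (1, 2, 4, 8):
    c2 = B * g / (h * w * c1)              # more output channels as g grows
    mac = h * w * (c1 + c2) + c1 * c2 / g
    print(f"g = {g}:  c2 = {c2:6.0f}  MAC = {mac / 1e6:.2f}M")
```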

To study the effect in practice, a benchmark network is built by stacking 10 pointwise group convolution layers. Table 2 reports the running speed of using different group numbers while fixing the total FLOPs. It is clear that using a large group number decreases running speed significantly. For example, using 8 groups is more than two times slower than using 1 group (standard dense convolution) on GPU and up to 30% slower on ARM. This is mostly due to increased MAC. We note that our implementation has been specially optimized and is much faster than trivially computing convolutions group by group.

Therefore, we suggest that the group number should be carefully chosen based on the target platform and task. It is unwise to use a large group number simply because it enables using more channels, because the benefit of increased accuracy can easily be outweighed by the rapidly increasing computational cost.

Table 3. Validation experiment for Guideline 3. c denotes the number of channels for 1-fragment. The channel number in other fragmented structures is adjusted so that the FLOPs is the same as 1-fragment. Input image size is \(56\times 56\).
Table 4. Validation experiment for Guideline 4. The ReLU and shortcut operations are removed from the “bottleneck” unit [5], separately. c is the number of channels in the unit. The unit is stacked repeatedly for 10 times to benchmark the speed.

(G3) Network Fragmentation Reduces Degree of Parallelism. In the GoogLeNet series [13, 27, 28, 29] and auto-generated architectures [18, 21, 39], a “multi-path” structure is widely adopted in each network block. A lot of small operators (called “fragmented operators” here) are used instead of a few large ones. For example, in NASNET-A [39] the number of fragmented operators (i.e. the number of individual convolution or pooling operations in one building block) is 13. In contrast, in regular structures like ResNet [5], this number is 2 or 3.

Though such fragmented structure has been shown beneficial for accuracy, it could decrease efficiency because it is unfriendly to devices with strong parallel computing power such as GPUs. It also introduces extra overheads such as kernel launching and synchronization.

To quantify how network fragmentation affects efficiency, we evaluate a series of network blocks with different degrees of fragmentation. Specifically, each building block consists of 1 to 4 \(1\times 1\) convolutions, which are arranged in sequence or in parallel; two of the block structures are sketched below, and all of them are illustrated in the appendix. Each block is repeatedly stacked 10 times. Results in Table 3 show that fragmentation reduces the speed significantly on GPU, e.g. the 4-fragment structure is 3\(\times \) slower than the 1-fragment one. On ARM, the speed reduction is relatively small.
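
The exact block structures are given in the appendix; the following is a minimal PyTorch sketch of two assumed variants with equal FLOPs, a 1-fragment block with a single \(1\times 1\) convolution and a 4-fragment block with four quarter-width \(1\times 1\) convolutions run in parallel and concatenated.

```python
# Sketch of two of the benchmarked block types (assumed reconstruction,
# the exact structures are in the appendix). Both blocks have the same
# FLOPs when c is divisible by 4.
import torch
import torch.nn as nn

class OneFragment(nn.Module):
    """1-fragment block: a single 1x1 convolution."""
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Conv2d(c, c, kernel_size=1)

    def forward(self, x):
        return self.conv(x)

class FourFragmentParallel(nn.Module):
    """4-fragment (parallel) block: four quarter-width 1x1 convolutions
    whose outputs are concatenated."""
    def __init__(self, c):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(c, c // 4, kernel_size=1) for _ in range(4)]
        )

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```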

Fig. 3.
Building blocks of ShuffleNet v1 [35] and this work. (a) the basic ShuffleNet unit; (b) the ShuffleNet unit for spatial down sampling (\(2\times \)); (c) our basic unit; (d) our unit for spatial down sampling (\(2\times \)). DWConv: depthwise convolution. GConv: group convolution.

(G4) Element-wise Operations are Non-negligible. As shown in Fig. 2, in light-weight models like [24, 35], element-wise operations occupy a considerable amount of time, especially on GPU. Here, the element-wise operators include ReLU, AddTensor, AddBias, etc. They have small FLOPs but relatively heavy MAC. Specifically, we also consider depthwise convolution [2, 8, 24, 35] as an element-wise operator as it also has a high MAC/FLOPs ratio.

For validation, we experimented with the “bottleneck” unit (\(1\times 1\) conv followed by \(3\times 3\) conv followed by \(1\times 1\) conv, with ReLU and shortcut connection) in ResNet [5]. The ReLU and shortcut operations are removed, separately. Runtime of the different variants is reported in Table 4. We observe that around 20% speedup is obtained on both GPU and ARM after ReLU and shortcut are removed.
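
For reference, a simplified PyTorch sketch of the ablated bottleneck unit is given below, with flags to remove the ReLU and the shortcut. BatchNorm layers are omitted for brevity; this is not the benchmark code used for Table 4.

```python
# ResNet-style bottleneck (1x1 -> 3x3 -> 1x1) with switches used to ablate
# the ReLU and the shortcut, as in the Table 4 experiment (simplified).
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, c, use_relu=True, use_shortcut=True):
        super().__init__()
        mid = c // 4
        self.conv1 = nn.Conv2d(c, mid, kernel_size=1)
        self.conv2 = nn.Conv2d(mid, mid, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(mid, c, kernel_size=1)
        self.act = nn.ReLU(inplace=True) if use_relu else nn.Identity()
        self.use_shortcut = use_shortcut

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.act(self.conv2(out))
        out = self.conv3(out)
        if self.use_shortcut:
            out = out + x          # element-wise Add: few FLOPs, high MAC
        return self.act(out)
```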

Conclusion and Discussions. Based on the above guidelines and empirical studies, we conclude that an efficient network architecture should (1) use “balanced” convolutions (equal channel width); (2) be aware of the cost of using group convolution; (3) reduce the degree of fragmentation; and (4) reduce element-wise operations. These desirable properties depend on platform characteristics (such as memory manipulation and code optimization) that are beyond theoretical FLOPs. They should be taken into account for practical network design.

Recent advances in light-weight neural network architectures [2, 8, 18, 21, 24, 35, 39] are mostly based on the metric of FLOPs and do not consider the properties above. For example, ShuffleNet v1 [35] heavily depends on group convolutions (against G2) and bottleneck-like building blocks (against G1). MobileNet v2 [24] uses an inverted bottleneck structure that violates G1. It uses depthwise convolutions and ReLUs on “thick” feature maps, which violates G4. The auto-generated structures [18, 21, 39] are highly fragmented and violate G3.

Table 5. Overall architecture of ShuffleNet v2, for four different levels of complexities.

3 ShuffleNet V2: An Efficient Architecture

Review of ShuffleNet v1 [35]. ShuffleNet is a state-of-the-art network architecture. It is widely adopted in low-end devices such as mobile phones. It inspires our work; thus, it is reviewed and analyzed first.

According to [35], the main challenge for light-weight networks is that only a limited number of feature channels is affordable under a given computation budget (FLOPs). To increase the number of channels without significantly increasing FLOPs, two techniques are adopted in [35]: pointwise group convolutions and bottleneck-like structures. A “channel shuffle” operation is then introduced to enable information communication between different groups of channels and improve accuracy. The building blocks are illustrated in Fig. 3(a) and (b).

As discussed in Sect. 2, both pointwise group convolutions and bottleneck structures increase MAC (G1 and G2). This cost is non-negligible, especially for light-weight models. Also, using too many groups violates G3. The element-wise “Add” operation in the shortcut connection is also undesirable (G4). Therefore, in order to achieve high model capacity and efficiency, the key issue is how to maintain a large number of equally wide channels with neither dense convolution nor too many groups.

Channel Split and ShuffleNet V2. Towards this purpose, we introduce a simple operator called channel split. It is illustrated in Fig. 3(c). At the beginning of each unit, the input of c feature channels is split into two branches with \(c - c'\) and \(c'\) channels, respectively. Following G3, one branch remains as identity. The other branch consists of three convolutions with the same input and output channels to satisfy G1. The two \(1\times 1\) convolutions are no longer group-wise, unlike [35]. This is partially to follow G2, and partially because the split operation already produces two groups.

After convolution, the two branches are concatenated. So, the number of channels remains the same (G1). The same “channel shuffle” operation as in [35] is then used to enable information communication between the two branches.

After the shuffling, the next unit begins. Note that the “Add” operation in ShuffleNet v1 [35] no longer exists. Element-wise operations like ReLU and depthwise convolutions exist only in one branch. Also, the three successive element-wise operations, “Concat”, “Channel Shuffle” and “Channel Split”, are merged into a single element-wise operation. These changes are beneficial according to G4.

For spatial down sampling, the unit is slightly modified and illustrated in Fig. 3(d). The channel split operator is removed. Thus, the number of output channels is doubled.
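
For clarity, a minimal PyTorch sketch of the basic unit in Fig. 3(c) is given below, assuming \(c'=c/2\) and omitting BatchNorm; the actual implementation may differ in such details.

```python
# Minimal sketch of the basic ShuffleNet V2 unit (Fig. 3(c)), assuming
# c' = c/2 and omitting BatchNorm for brevity.
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Interleave channels across groups: reshape -> transpose -> flatten."""
    n, c, h, w = x.size()
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))

class ShuffleV2Unit(nn.Module):
    """Channel split, three equal-width convolutions on one branch,
    identity on the other, concatenation, then channel shuffle."""
    def __init__(self, c):
        super().__init__()
        half = c // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half),   # depthwise
            nn.Conv2d(half, half, 1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)                     # channel split
        out = torch.cat([x1, self.branch(x2)], dim=1)  # equal width in/out (G1)
        return channel_shuffle(out, groups=2)
```

For example, ShuffleV2Unit(116)(torch.randn(1, 116, 28, 28)) returns a tensor of the same shape, with half of the input channels passed through untouched.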

The proposed building blocks (c)(d), as well as the resulting networks, are called ShuffleNet V2. Based on the above analysis, we conclude that this architecture design is highly efficient as it follows all the guidelines.

The building blocks are repeatedly stacked to construct the whole network. For simplicity, we set \(c'=c/2\). The overall network structure is similar to ShuffleNet v1 [35] and summarized in Table 5. There is only one difference: an additional \(1\times 1\) convolution layer is added right before global average pooling to mix up features, which is absent in ShuffleNet v1. Similar to [35], the number of channels in each block is scaled to generate networks of different complexities, marked as 0.5\(\times \), 1\(\times \), etc.

Fig. 4.
Illustration of the patterns in feature reuse for DenseNet [11] and ShuffleNet V2. (a) (courtesy of [11]) the average absolute filter weight of convolutional layers in a model. The color of pixel (s, l) encodes the average l1-norm of weights connecting layer s to layer l. (b) The color of pixel (s, l) means the number of channels directly connecting block s to block l in ShuffleNet v2. All pixel values are normalized to [0, 1]. (Color figure online)

Analysis of Network Accuracy. ShuffleNet v2 is not only efficient, but also accurate. There are two main reasons. First, the high efficiency in each building block enables using more feature channels and larger network capacity.

Second, in each block, half of feature channels (when \(c'=c/2\)) directly go through the block and join the next block. This can be regarded as a kind of feature reuse, in a similar spirit as in DenseNet [11] and CondenseNet [10].

In DenseNet [11], to analyze the feature reuse pattern, the l1-norm of the weights between layers is plotted, as in Fig. 4(a). It is clear that the connections between adjacent layers are stronger than the others. This implies that the dense connection between all layers could introduce redundancy. The recent CondenseNet [10] also supports this viewpoint.

In ShuffleNet V2, it is easy to prove that the number of “directly-connected” channels between the i-th and \((i+j)\)-th building block is \(r^jc\), where \(r=(c-c')/c\). In other words, the amount of feature reuse decays exponentially with the distance between two blocks. Between distant blocks, the feature reuse becomes much weaker. Figure 4(b) plots a similar visualization as in (a), for \(r=0.5\). Note that the pattern in (b) is similar to (a).
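
As a worked example of this decay, with \(c'=c/2\) (i.e. \(r=0.5\)) and an illustrative width of \(c=128\) channels:

```python
# Feature-reuse decay: channels directly connecting block i to block i+j.
c, r = 128, 0.5            # c' = c/2 gives r = (c - c')/c = 0.5; c = 128 is illustrative

for j in range(1, 6):
    print(f"j = {j}:  directly-connected channels = {r**j * c:.0f}")
# prints 64, 32, 16, 8, 4: reuse halves with every additional block
```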

Thus, the structure of ShuffleNet V2 realizes this type of feature reuse pattern by design. It shares a similar benefit of feature reuse for high accuracy as DenseNet [11], but it is much more efficient, as analyzed earlier. This is verified in the experiments in Table 8.

4 Experiment

Our ablation experiments are performed on ImageNet 2012 classification dataset [4, 23]. Following the common practice [8, 24, 35], all networks in comparison have four levels of computational complexity, i.e. about 40, 140, 300 and 500+ MFLOPs. Such complexity is typical for mobile scenarios. Other hyper-parameters and protocols are exactly the same as ShuffleNet v1 [35].

We compare with following network architectures [2, 11, 24, 35]:

  • ShuffleNet v1 [35]. In [35], a series of group numbers g is compared. It is suggested that \(g=3\) has a better trade-off between accuracy and speed. This also agrees with our observation. In this work we mainly use \(g=3\).

  • MobileNet v2 [24]. It is better than MobileNet v1 [8]. For comprehensive comparison, we report accuracy from both the original paper [24] and our reimplementation, as some results in [24] are not available.

  • Xception [2]. The original Xception model [2] is very large (FLOPs >2G), which is out of our range of comparison. The recent work [16] proposes a modified light-weight Xception structure that shows better trade-offs between accuracy and efficiency. So, we compare with this variant.

  • DenseNet [11]. The original work [11] only reports results of large models (FLOPs >2G). For direct comparison, we reimplement it following the architecture settings in Table 5, where the building blocks in Stage 2–4 consist of DenseNet blocks. We adjust the number of channels to meet different target complexities.

Table 8 summarizes all the results. We analyze these results from different aspects.

Accuracy vs. FLOPs. It is clear that the proposed ShuffleNet v2 models outperform all other networks by a large margin, especially under smaller computational budgets. Also, we note that MobileNet v2 performs poorly at the 40 MFLOPs level with \(224\times 224\) image size. This is probably caused by too few channels. In contrast, our model does not suffer from this drawback as our efficient design allows using more channels. Also, while both our model and DenseNet [11] reuse features, our model is much more efficient, as discussed in Sect. 3.

Table 8 also compares our model with other state-of-the-art networks including CondenseNet [10], IGCV2 [31], and IGCV3 [26] where appropriate. Our model performs better consistently at various complexity levels.

Inference Speed vs. FLOPs/Accuracy. For four architectures with good accuracy, ShuffleNet v2, MobileNet v2, ShuffleNet v1 and Xception, we compare their actual speed vs. FLOPs, as shown in Fig. 1(c) and (d). More results on different resolutions are provided in Appendix Table 1.

ShuffleNet v2 is clearly faster than the other three networks, especially on GPU. For example, at 500 MFLOPs ShuffleNet v2 is 58% faster than MobileNet v2, 63% faster than ShuffleNet v1 and 25% faster than Xception. On ARM, the speeds of ShuffleNet v1, Xception and ShuffleNet v2 are comparable; however, MobileNet v2 is much slower, especially at smaller FLOPs. We believe this is because MobileNet v2 has higher MAC (see G1 and G4 in Sect. 2), which is significant on mobile devices.

Compared with MobileNet v1 [8], IGCV2 [31], and IGCV3 [26], we have two observations. First, although the accuracy of MobileNet v1 is not as good, its speed on GPU is faster than all the counterparts, including ShuffleNet v2. We believe this is because its structure satisfies most of the proposed guidelines (e.g. for G3, the fragments of MobileNet v1 are even fewer than those of ShuffleNet v2). Second, IGCV2 and IGCV3 are slow. This is due to the usage of too many convolution groups (4 or 8 in [26, 31]). Both observations are consistent with our proposed guidelines.

Recently, automatic model search [18, 21, 22, 32, 38, 39] has become a promising trend for CNN architecture design. The bottom section in Table 8 evaluates some auto-generated models. We find that their speeds are relatively slow. We believe this is mainly due to the usage of too many fragments (see G3). Nevertheless, this research direction is still promising. Better models may be obtained, for example, if model search algorithms are combined with our proposed guidelines, and the direct metric (speed) is evaluated on the target platform.

Finally, Fig. 1(a) and (b) summarizes the results of accuracy vs. speed, the direct metric. We conclude that ShuffleNet v2 is the best on both GPU and ARM.

Compatibility with Other Methods. ShuffleNet v2 can be combined with other techniques to further advance the performance. When equipped with the Squeeze-and-Excitation (SE) module [9], the classification accuracy of ShuffleNet v2 is improved by 0.5% at the cost of a certain loss in speed. The block structure is illustrated in Appendix Fig. 2(b). Results are shown in Table 8 (bottom section).

Generalization to Large Models. Although our main ablation is performed for light-weight scenarios, ShuffleNet v2 can be used for large models (e.g., FLOPs \(\ge \) 2G). Table 6 compares a 50-layer ShuffleNet v2 (details in Appendix) with the counterparts of ShuffleNet v1 [35] and ResNet-50 [5]. ShuffleNet v2 still outperforms ShuffleNet v1 at 2.3 GFLOPs and surpasses ResNet-50 with 40% fewer FLOPs.

Table 6. Results of large models. See text for details.

For very deep ShuffleNet v2 (e.g. over 100 layers), to make the training converge faster, we slightly modify the basic ShuffleNet v2 unit by adding a residual path (details in Appendix). Table 6 presents a ShuffleNet v2 model of 164 layers equipped with SE [9] components (details in Appendix). It obtains superior accuracy over the previous state-of-the-art models [9] with much fewer FLOPs.

Object Detection. To evaluate the generalization ability, we also tested on the COCO object detection [17] task. We use the state-of-the-art light-weight detector, Light-Head RCNN [16], as our framework and follow the same training and test protocols. Only the backbone networks are replaced with ours. Models are pretrained on ImageNet and then finetuned on the detection task. For training we use the train+val set in COCO except for 5000 images from the minival set, and use the minival set for testing. The accuracy metric is COCO standard mmAP, i.e. the averaged mAPs at box IoU thresholds from 0.5 to 0.95.

ShuffleNet v2 is compared with three other light-weight models, Xception [2, 16], ShuffleNet v1 [35] and MobileNet v2 [24], on four levels of complexity. Results in Table 7 show that ShuffleNet v2 performs the best.

Table 7. Performance on COCO object detection. The input image size is \(800\times 1200\). FLOPs row lists the complexity levels at \(224\times 224\) input size. For GPU speed evaluation, the batch size is 4. We do not test ARM because the PSRoI Pooling operation needed in [16] is unavailable on ARM currently.
Table 8. Comparison of several network architectures over classification error (on validation set, single center crop) and speed, on two platforms and four levels of computation complexity. Results are grouped by complexity levels for better comparison. The batch size is 8 for GPU and 1 for ARM. The image size is \(224\times 224\) except: [*] \(160\times 160\) and [**] \(192\times 192\). We do not provide speed measurements for CondenseNets [10] due to lack of efficient implementation currently.

Comparing the detection results (Table 7) with the classification results (Table 8), it is interesting that, on classification the accuracy rank is ShuffleNet v2 \(\ge \) MobileNet v2 > ShuffleNet v1 > Xception, while on detection the rank becomes ShuffleNet v2 > Xception \(\ge \) ShuffleNet v1 \(\ge \) MobileNet v2. This reveals that Xception is good at the detection task, probably due to the larger receptive field of the Xception building blocks compared with the other counterparts (7 vs. 3). Inspired by this, we also enlarge the receptive field of ShuffleNet v2 by introducing an additional \(3\times 3\) depthwise convolution before the first pointwise convolution in each building block. This variant is denoted as ShuffleNet v2*. With only a few additional FLOPs, it further improves accuracy.

We also benchmark the runtime on GPU. For fair comparison the batch size is set to 4 to ensure full GPU utilization. Due to the overheads of data copying (the resolution is as high as \(800\times 1200\)) and other detection-specific operations (like PSRoI Pooling [16]), the speed gap between different models is smaller than that of classification. Still, ShuffleNet v2 outperforms the others, e.g. around 40% faster than ShuffleNet v1 and 16% faster than MobileNet v2.

Furthermore, the variant ShuffleNet v2* has the best accuracy and is still faster than the other methods. This motivates a practical question: how to increase the size of the receptive field? This is critical for object detection in high-resolution images [20]. We will study this topic in the future.

5 Conclusion

We propose that network architecture design should consider the direct metric such as speed, instead of the indirect metric like FLOPs. We present practical guidelines and a novel architecture, ShuffleNet v2. Comprehensive experiments verify the effectiveness of our new model. We hope this work will inspire future work on network architecture design that is platform aware and more practical.