Correction to: Multimed Tools Appl

https://doi.org/10.1007/s11042-017-5182-z

The original version of this article unfortunately contained mistakes. The author misspelled the word “symmetric” as “asymmetric” in the article title and in some occurrences in the text. The correct article title is presented above, and the sentences should be corrected as follows:

Page 4: What’s more, in this paper, we have proposed a symmetric method to improve the training of the original triplet loss, where the distance between different persons’ images is constructed by two anchor images instead of using only one anchor image.
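For illustration only (this sketch is not part of the correction, and the exact loss definition is given in the original article), one plausible reading of the corrected sentence is that both same-person images act as anchors when measuring the inter-class distance. A minimal NumPy sketch under that assumption, with hypothetical function names and a hypothetical margin parameter:

```python
import numpy as np

def dist(x, y):
    """Squared Euclidean distance between two feature vectors."""
    return np.sum((x - y) ** 2)

def standard_triplet_loss(a, p, n, margin=1.0):
    # Original triplet loss: the inter-class distance is measured
    # from a single anchor image a to the negative n.
    return max(0.0, margin + dist(a, p) - dist(a, n))

def symmetric_triplet_loss(a, p, n, margin=1.0):
    # Hypothetical symmetric variant: both images of the same person
    # (a and p) serve as anchors, so the negative n is pushed away
    # from both rather than from a alone.
    inter = 0.5 * (dist(a, n) + dist(p, n))
    return max(0.0, margin + dist(a, p) - inter)
```

With a negative that is close to the positive but not to the anchor, the single-anchor loss can be zero while the two-anchor form still produces a gradient, which is one way the symmetric construction can tighten training.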

Page 7: 3.3 Introduction of the symmetric triplet loss function.

Page 8: In summary, the symmetric triplet loss function is defined as follows:

Page 9: The objective function in (1) consists of two terms, the first of which is the proposed symmetric triplet loss function.

Page 9: For the optimization process, we mainly focus on the symmetric triplet loss function. We have used the stochastic gradient descent algorithm to jointly train the proposed CNN architecture with the symmetric triplet loss and the identification loss function.

Page 11: • Variant 1 (denoted as OursT): We train the CNN models only with the symmetric triplet loss function.

• Variant 3 (denoted as OursTI): We train the CNN models with the joint symmetric triplet and identification loss function.

Page 11: By comparing the identification (center loss embedded softmax) loss with the symmetric triplet loss, we have found that the triplet loss function is better than the identification (softmax) loss on the iLIDS and Prid2011 datasets.

Page 14: Then we propose a new algorithm with the joint supervision of the symmetric triplet loss function and the center loss embedded softmax cost function. In this framework, the CNN architecture is trained with two supervision signals: first, the symmetric triplet cost aims to produce features that pull the instances of the same person closer and push the instances belonging to different persons far away from each other in the learned feature space.
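For illustration only (not part of the correction), the center loss embedded softmax term quoted above can be sketched as a standard softmax cross-entropy plus a center loss, following the common formulation of center loss; the function name, the classifier weights, and the weighting hyperparameter `alpha` are all assumptions, not the article's notation:

```python
import numpy as np

def center_loss_softmax(features, labels, weights, centers, alpha=0.5):
    """Identification loss: softmax cross-entropy plus a center loss
    that pulls each feature toward its class center.

    features: (N, D) feature vectors; labels: (N,) class indices;
    weights: (D, C) classifier weights; centers: (C, D) class centers.
    """
    # Softmax cross-entropy (identification) term, numerically stabilized.
    logits = features @ weights
    logits = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    # Center loss term: squared distance of each feature to its class center.
    cl = 0.5 * np.mean(np.sum((features - centers[labels]) ** 2, axis=1))
    return ce + alpha * cl
```

In the joint framework described above, this identification term would be added to the symmetric triplet loss with some trade-off weight; the specific weighting is defined in the original article.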