
Turning a Blind Eye: Explicit Removal of Biases and Variation from Deep Neural Network Embeddings

  • Mohsan Alvi
  • Andrew Zisserman
  • Christoffer Nellåker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11129)

Abstract

Neural networks achieve state-of-the-art performance in image classification tasks. However, they can encode spurious variations or biases that may be present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender-biased predictions (e.g. wrongly predicting that males are older if only elderly males are in the training set). We present two distinct contributions:

  1. An algorithm that can remove multiple sources of variation from the feature representation of a network. We demonstrate that this algorithm can be used to remove biases from the feature representation, and thereby improve classification accuracies, when training networks on extremely biased datasets (a brief illustrative sketch follows this list).

  2. An ancestral origin database of 14,000 images of individuals from East Asia, the Indian subcontinent, sub-Saharan Africa, and Western Europe. We demonstrate on this dataset, for a number of facial attribute classification tasks, that we are able to remove racial biases from the network feature representation.
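
The abstract does not spell out how contribution (1) works, but the general idea, keeping the embedding informative for the primary task while a classifier for the unwanted variable, trained on that same embedding, is pushed towards chance, can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch sketch rather than the authors' implementation: the linear stand-in trunk, layer sizes, optimisers, the weighting term alpha, and the confusion-to-uniform loss used here as the unlearning signal are all assumptions.

```python
# Hypothetical sketch: joint learning of a primary task and "unlearning" of a bias
# variable via a confusion loss. Architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiasRemovalNet(nn.Module):
    def __init__(self, in_dim=512, feat_dim=128, n_primary=8, n_bias=2):
        super().__init__()
        # Stand-in trunk; in practice this would be a CNN feature extractor.
        self.trunk = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.primary_head = nn.Linear(feat_dim, n_primary)  # e.g. age group
        self.bias_head = nn.Linear(feat_dim, n_bias)        # e.g. gender

    def forward(self, x):
        z = self.trunk(x)
        return z, self.primary_head(z), self.bias_head(z)

def training_step(model, opt_main, opt_bias, x, y_primary, y_bias, alpha=1.0):
    # (a) Keep the bias head a good bias predictor, on *detached* features,
    #     so this update never changes the embedding itself.
    z, _, _ = model(x)
    loss_bias = F.cross_entropy(model.bias_head(z.detach()), y_bias)
    opt_bias.zero_grad(); loss_bias.backward(); opt_bias.step()

    # (b) Update trunk + primary head: primary-task loss plus a confusion loss,
    #     the cross-entropy between the bias head's prediction and a uniform
    #     distribution, which drives bias information out of the embedding.
    z, primary_logits, bias_logits = model(x)
    loss_primary = F.cross_entropy(primary_logits, y_primary)
    loss_confusion = -F.log_softmax(bias_logits, dim=1).mean()  # CE to uniform target
    opt_main.zero_grad()
    (loss_primary + alpha * loss_confusion).backward()
    opt_main.step()
    return loss_primary.item(), loss_bias.item(), loss_confusion.item()

model = BiasRemovalNet()
# opt_main holds only trunk + primary-head parameters, so the confusion loss
# moves the embedding rather than the bias classifier.
opt_main = torch.optim.SGD(list(model.trunk.parameters())
                           + list(model.primary_head.parameters()), lr=1e-2)
opt_bias = torch.optim.SGD(model.bias_head.parameters(), lr=1e-2)

# One step on a random mini-batch, purely to show the call signature.
x = torch.randn(16, 512)
y_primary = torch.randint(0, 8, (16,))
y_bias = torch.randint(0, 2, (16,))
print(training_step(model, opt_main, opt_bias, x, y_primary, y_bias))
```

A gradient-reversal layer, as used in domain-adversarial training, would be an alternative way to obtain a similar unlearning signal; adding one bias head and one confusion term per spurious factor is the natural way to extend such a sketch to multiple sources of variation.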

     

Keywords

Dataset bias · Face attribute classification · Ancestral origin dataset

Notes

Acknowledgments

This research was financially supported by the EPSRC Programme Grant Seebibyte EP/M013774/1, EPSRC Grant EP/G036861/1, and MRC Grant MR/M014568/1.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Mohsan Alvi (1)
  • Andrew Zisserman (1)
  • Christoffer Nellåker (2, 3)

  1. Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, UK
  2. Nuffield Department of Obstetrics and Gynaecology, University of Oxford, Oxford, UK
  3. Big Data Institute, University of Oxford, Oxford, UK
