
A Guide to the NeurIPS 2018 Competitions

  • Conference paper
The NeurIPS '18 Competition

Abstract

Competitions have become an integral part of advancing the state of the art in artificial intelligence (AI). They differ from benchmarks in one important way: competitions test a system end-to-end rather than evaluating a single component, and they assess the practicability of an algorithmic solution in addition to its feasibility. In this volume, we present the details of eight AI competitions that took place between February and December 2018 and were presented at the Neural Information Processing Systems conference in Montreal, Canada on December 8, 2018. The competitions spanned challenges in robotics, computer vision, natural language processing, games, health, systems, and physics.


Notes

  1. The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples of 28 × 28 grayscale images depicting the digits zero to nine (a loading sketch follows these notes).

  2. The dataset has 7,291 training and 2,007 test images of 16 × 16 pixel grayscale images depicting the digits zero to nine.

  3. In October 2006, Netflix provided a training data set of 100,480,507 ratings (ranging from 1 to 5 stars) that 480,189 users gave to 17,770 movies. The competition offered $1M in prize money for improving the root-mean-square error (RMSE) of Netflix's baseline Cinematch system by 10% (an RMSE sketch follows these notes). By June 2007, over 20,000 teams from over 150 countries had registered for the competition, and of those, 2,000 teams had submitted over 13,000 prediction sets. In June 2009, a team from AT&T Labs won the Netflix competition [3].

  4. See https://en.wikipedia.org/wiki/Bomberman for more details.

  5. See https://opensim.stanford.edu/ for more details.

  6. See https://tiny-imagenet.herokuapp.com/ for more details.

  7. This metric is also known as the example-based F-score with β = 2 (the formula is spelled out after these notes).

  8. For example, http://automl.chalearn.org/.
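
To make note 1 concrete, the following minimal sketch loads MNIST and checks the quoted dimensions. It assumes TensorFlow/Keras is available; this is just one of several ways to obtain the dataset and is not part of the original text.

    # Sketch: verify the MNIST dimensions quoted in note 1 (assumes TensorFlow is installed).
    from tensorflow.keras.datasets import mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    print(x_train.shape)                  # (60000, 28, 28) -- 60,000 training images, 28 x 28
    print(x_test.shape)                   # (10000, 28, 28) -- 10,000 test images
    print(sorted(set(y_train.tolist())))  # labels are the digits 0 through 9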
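
For note 3, the competition metric was root-mean-square error over predicted star ratings, with the prize awarded for a 10% improvement over the Cinematch baseline. A minimal Python sketch of that criterion follows; the ratings and baseline passed in are illustrative placeholders, not competition data.

    # Sketch: the Netflix Prize winning condition described in note 3.
    import math

    def rmse(predicted, actual):
        """Root-mean-square error over paired rating lists."""
        return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

    def beats_cinematch(model_rmse, baseline_rmse, improvement=0.10):
        """True if model_rmse improves on the baseline RMSE by at least 10%."""
        return model_rmse <= (1.0 - improvement) * baseline_rmse

    preds = [4.2, 3.1, 5.0, 2.4]  # hypothetical model outputs
    truth = [4, 3, 5, 2]          # hypothetical 1-to-5 star ratings
    print(rmse(preds, truth))     # ~0.2291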
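
Note 7 refers to the example-based F-score with β = 2. For reference, the general F_β score, written here in LaTeX in terms of precision P and recall R, is

    F_\beta = \frac{(1 + \beta^2)\, P\, R}{\beta^2 P + R},
    \qquad
    F_2 = \frac{5\, P\, R}{4 P + R}

With β = 2, recall is weighted more heavily than precision; "example-based" means the score is computed per example over its predicted and true label sets and then averaged across examples.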

References

  1. Claire Adam-Bourdarios, Glen Cowan, Cécile Germain-Renaud, Isabelle Guyon, Balázs Kégl, and David Rousseau. The Higgs machine learning challenge. Journal of Physics: Conference Series, volume 664. IOP Publishing, 2015.


  2. Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6: 14410–14430, 2018.


  3. Robert M. Bell and Yehuda Koren. Lessons from the Netflix prize challenge. SIGKDD Explorations, 9 (2): 75–79, 2007.


  4. Carl Boettiger. An introduction to Docker for reproducible research. ACM SIGOPS Operating Systems Review, 49 (1): 71–79, 2015.


  5. Andrew P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30 (7): 1145–1159, 1997.


  6. Jennifer Carpenter. May the best analyst win. Science, 331 (6018): 698–699, 2011.


  7. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20 (3): 273–297, 1995.


  8. Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 14 (3): 641–668, 2002.


  9. Sergio Escalera and Markus Weimer, editors. The NIPS’17 Competition: Building Intelligent Systems. Springer, 2018.


  10. Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1 (4): 541–551, 1989.


  11. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. Technical report, OpenAI, 2018. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.

  12. David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323 (6088): 533–536, 1986.


  13. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.



Author information


Corresponding author

Correspondence to Sergio Escalera.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Herbrich, R., Escalera, S. (2020). A Guide to the NeurIPS 2018 Competitions. In: Escalera, S., Herbrich, R. (eds) The NeurIPS '18 Competition. The Springer Series on Challenges in Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-030-29135-8_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-29135-8_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-29134-1

  • Online ISBN: 978-3-030-29135-8

  • eBook Packages: Computer Science, Computer Science (R0)
