
A Toolkit for Analysis of Deep Learning Experiments

  • Jim O’Donoghue
  • Mark Roantree
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9897)

Abstract

Learning experiments are complex procedures that generate high volumes of data, due to the number of parameter updates that occur during training and the number of trials necessary for hyper-parameter selection. Often, interim result data is purged at runtime as the experiment progresses. This purge makes rolling back to interim experiments, restarting at a specific point, or discovering trends and patterns in parameters, hyper-parameters or results almost impossible for a large experiment or experiment set. In this research, we present a data model which captures all aspects of a deep learning experiment and, through an application programming interface, provides a simple means of storing, retrieving and analysing parameter settings and interim results at any point in the experiment. This has the further benefit of a high level of interoperability and sharing across machine learning researchers, who can use the model and its interface for data management.
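The abstract does not name the toolkit's actual API calls. The following is a minimal Python sketch of the kind of interface described, in which hyper-parameters and interim results are persisted at every step so a trial can later be inspected, compared or resumed. All names here (ExperimentStore, log_params, log_interim, interim_results) are illustrative assumptions, not the authors' implementation.

    # Hypothetical experiment-logging interface; names and schema are assumptions.
    import json
    import sqlite3
    from datetime import datetime

    class ExperimentStore:
        """Persist hyper-parameters and interim results for one experiment."""

        def __init__(self, db_path="experiments.db"):
            self.conn = sqlite3.connect(db_path)
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS runs "
                "(run_id TEXT, recorded_at TEXT, kind TEXT, payload TEXT)"
            )

        def log_params(self, run_id, params):
            # Store the full hyper-parameter configuration once per trial.
            self._write(run_id, "params", params)

        def log_interim(self, run_id, epoch, metrics):
            # Store interim results (e.g. training loss) every epoch,
            # so no intermediate state is purged during the run.
            self._write(run_id, "interim", {"epoch": epoch, **metrics})

        def interim_results(self, run_id):
            # Retrieve all interim records for one run, e.g. to restart
            # from a specific point or to analyse trends after the fact.
            rows = self.conn.execute(
                "SELECT payload FROM runs WHERE run_id=? AND kind='interim'",
                (run_id,),
            )
            return [json.loads(p) for (p,) in rows]

        def _write(self, run_id, kind, payload):
            self.conn.execute(
                "INSERT INTO runs VALUES (?, ?, ?, ?)",
                (run_id, datetime.utcnow().isoformat(), kind, json.dumps(payload)),
            )
            self.conn.commit()

    # Usage: record a trial's hyper-parameters, then its per-epoch results.
    store = ExperimentStore()
    store.log_params("trial-1", {"learning_rate": 0.01, "hidden_units": 128})
    store.log_interim("trial-1", epoch=1, metrics={"train_loss": 0.92})
    store.log_interim("trial-1", epoch=2, metrics={"train_loss": 0.71})
    print(store.interim_results("trial-1"))

A relational store is used here purely for illustration; the paper's data model and interface may differ, but the sketch shows how retaining every interim record makes rollback and trend analysis straightforward.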


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Insight Centre for Data Analytics, School of Computing, Dublin City University, Dublin, Ireland
