OntoDBench: Novel Benchmarking System for Ontology-Based Databases

  • Stéphane Jean
  • Ladjel Bellatreche
  • Géraud Fokou
  • Mickaël Baron
  • Selma Khouri
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7566)


Due to the explosion of ontologies on the Web (Semantic Web, e-commerce, and so on), organizations face the problem of managing large amounts of ontological data. Several academic and industrial databases have been extended to cope with such data; they are called Ontology-Based Databases (OBDBs) and store both ontologies and their data in the same repository. Unlike traditional databases, whose logical models follow the relational model and where most of the properties identified in the conceptual phase are valuated, OBDBs are based on ontologies that describe a given domain in a general way: some concepts and properties may be neither used nor valuated, and different storage models may be used for ontologies and for their instances. Benchmarking OBDBs therefore represents a crucial challenge. Unfortunately, existing OBDB benchmarks manipulate ontologies and instances whose characteristics, in terms of the concepts, attributes, or instances used, are far from those of real-life applications. As a consequence, it is difficult to identify an appropriate physical storage model for the target OBDB, one that enables efficient query processing. In this paper, we propose a novel benchmarking system, called OntoDBench, to evaluate the performance and scalability of the available storage models for ontological data. Our benchmark system allows: (1) evaluating the relevant characteristics of real data sets, (2) storing a data set following the existing storage models, (3) expressing workload queries over these models, and (4) evaluating query performance. The proposed ontology-centric benchmark is validated using the data sets and workload of the Lehigh University Benchmark (LUBM).
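The storage models mentioned above are, in the literature, typically the single "triple table" layout (e.g., 3store, Jena) and the binary, vertically partitioned layout with one two-column table per property (Abadi et al.). As a minimal sketch of why the choice matters for query expression, assuming SQLite and made-up LUBM-style facts (the table names, data, and query are illustrative, not taken from OntoDBench itself), the same workload query can be posed against both layouts:

```python
import sqlite3

# Illustrative LUBM-style facts as (subject, property, object) statements.
facts = [
    ("Student1", "type", "GraduateStudent"),
    ("Student1", "takesCourse", "Course1"),
    ("Student2", "type", "GraduateStudent"),
    ("Student2", "takesCourse", "Course2"),
]

db = sqlite3.connect(":memory:")

# Storage model 1: a single triple table holding every statement.
db.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
db.executemany("INSERT INTO triples VALUES (?, ?, ?)", facts)

# Storage model 2: vertical partitioning, one binary (s, o) table per property.
for prop in sorted({p for _, p, _ in facts}):
    db.execute(f'CREATE TABLE "{prop}" (s TEXT, o TEXT)')
for s, p, o in facts:
    db.execute(f'INSERT INTO "{p}" VALUES (?, ?)', (s, o))

# Workload query: graduate students and the courses they take.
# On the triple table it needs a self-join on the subject column.
q_triple = """
    SELECT t1.s, t2.o FROM triples t1
    JOIN triples t2 ON t1.s = t2.s
    WHERE t1.p = 'type' AND t1.o = 'GraduateStudent'
      AND t2.p = 'takesCourse'
    ORDER BY t1.s
"""
# On the binary layout it becomes a join of two small property tables.
q_binary = """
    SELECT ty.s, tc.o FROM "type" ty
    JOIN "takesCourse" tc ON ty.s = tc.s
    WHERE ty.o = 'GraduateStudent'
    ORDER BY ty.s
"""
print(db.execute(q_triple).fetchall())
print(db.execute(q_binary).fetchall())
```

Both queries return the same result, but they scan very different structures, which is exactly the kind of performance trade-off a storage-model benchmark has to measure.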


Keywords: Binary Representation · Query Performance · Storage Model · Query Response Time · Benchmarking System




  1. Harris, S., Gibbins, N.: 3store: Efficient Bulk RDF Storage. In: Proceedings of the 1st International Workshop on Practical and Scalable Semantic Systems (PSSS 2003), pp. 1–15 (2003)
  2. Lu, J., Ma, L., Zhang, L., Brunner, J.S., Wang, C., Pan, Y., Yu, Y.: SOR: A Practical System for Ontology Storage, Reasoning and Search. In: Proceedings of the 33rd International Conference on Very Large Data Bases (VLDB 2007), pp. 1402–1405 (2007)
  3. McBride, B.: Jena: Implementing the RDF Model and Syntax Specification (2001)
  4. Wu, Z., Eadon, G., Das, S., Chong, E.I., Kolovski, V., Annamalai, M., Srinivasan, J.: Implementing an Inference Engine for RDFS/OWL Constructs and User-Defined Rules in Oracle. In: Proceedings of the 24th International Conference on Data Engineering (ICDE 2008), pp. 1239–1248 (2008)
  5. Broekstra, J., Kampman, A., van Harmelen, F.: Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema. In: Horrocks, I., Hendler, J. (eds.) ISWC 2002. LNCS, vol. 2342, pp. 54–68. Springer, Heidelberg (2002)
  6. Pan, Z., Heflin, J.: DLDB: Extending Relational Databases to Support Semantic Web Queries. In: Proceedings of the 1st International Workshop on Practical and Scalable Semantic Systems (PSSS 2003), pp. 109–113 (2003)
  7. Dehainsala, H., Pierra, G., Bellatreche, L.: OntoDB: An Ontology-Based Database for Data Intensive Applications. In: Kotagiri, R., Radha Krishna, P., Mohania, M., Nantajeewarawat, E. (eds.) DASFAA 2007. LNCS, vol. 4443, pp. 497–508. Springer, Heidelberg (2007)
  8. Park, M.-J., Lee, J.-H., Lee, C.-H., Lin, J., Serres, O., Chung, C.-W.: An Efficient and Scalable Management of Ontology. In: Kotagiri, R., Radha Krishna, P., Mohania, M., Nantajeewarawat, E. (eds.) DASFAA 2007. LNCS, vol. 4443, pp. 975–980. Springer, Heidelberg (2007)
  9. Erling, O., Mikhailov, I.: RDF Support in the Virtuoso DBMS. In: Conference on Social Semantic Web (CSSW 2007), vol. 113, pp. 59–68 (2007)
  10. Bishop, B., Kiryakov, A., Ognyanoff, D., Peikov, I., Tashev, Z., Velkov, R.: OWLIM: A Family of Scalable Semantic Repositories. Semantic Web 2(1), 1–10 (2011)
  11. Abadi, D.J., Marcus, A., Madden, S.R., Hollenbach, K.: Scalable Semantic Web Data Management Using Vertical Partitioning. In: Proceedings of the 33rd International Conference on Very Large Data Bases (VLDB 2007), pp. 411–422 (2007)
  12. Guo, Y., Pan, Z., Heflin, J.: LUBM: A Benchmark for OWL Knowledge Base Systems. Journal of Web Semantics 3(2-3), 158–182 (2005)
  13. O’Neil, P., O’Neil, E., Chen, X., Revilak, S.: The Star Schema Benchmark and Augmented Fact Table Indexing. In: Nambiar, R., Poess, M. (eds.) TPCTC 2009. LNCS, vol. 5895, pp. 237–252. Springer, Heidelberg (2009)
  14. Carey, M.J., DeWitt, D.J., Naughton, J.F.: The OO7 Benchmark. In: Proceedings of the ACM SIGMOD International Conference on Management of Data (SIGMOD), pp. 12–21 (1993)
  15. Bressan, S., Li Lee, M., Li, Y.G., Lacroix, Z., Nambiar, U.: The XOO7 Benchmark. In: Bressan, S., Chaudhri, A.B., Li Lee, M., Yu, J.X., Lacroix, Z. (eds.) EEXTT and DIWeb 2002. LNCS, vol. 2590, pp. 146–147. Springer, Heidelberg (2003)
  16. Wilkinson, K.: Jena Property Table Implementation. In: Proceedings of the 2nd International Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS 2006), pp. 35–46 (2006)
  17. Abadi, D., Marcus, A., Madden, S., Hollenbach, K.: Using the Barton Libraries Dataset as an RDF Benchmark. Technical Report MIT-CSAIL-TR-2007-036, MIT (2007)
  18. Bizer, C., Schultz, A.: The Berlin SPARQL Benchmark. Semantic Web and Information Systems 5(2), 1–24 (2009)
  19. Schmidt, M., Hornung, T., Lausen, G., Pinkel, C.: SP2Bench: A SPARQL Performance Benchmark. In: Proceedings of the 25th International Conference on Data Engineering (ICDE 2009), pp. 222–233 (2009)
  20. Morsey, M., Lehmann, J., Auer, S., Ngonga Ngomo, A.-C.: DBpedia SPARQL Benchmark – Performance Assessment with Real Queries on Real Data. In: Aroyo, L., Welty, C., Alani, H., Taylor, J., Bernstein, A., Kagal, L., Noy, N., Blomqvist, E. (eds.) ISWC 2011, Part I. LNCS, vol. 7031, pp. 454–469. Springer, Heidelberg (2011)
  21. Duan, S., Kementsietsidis, A., Srinivas, K., Udrea, O.: Apples and Oranges: A Comparison of RDF Benchmarks and Real RDF Datasets. In: Proceedings of the 2011 International Conference on Management of Data (SIGMOD 2011), pp. 145–156 (2011)
  22. Apweiler, R., Bairoch, A., Wu, C.H., Barker, W.C., Boeckmann, B., Ferro, S., Gasteiger, E., Huang, H., Lopez, R., Magrane, M., Martin, M.J., Natale, D.A., O’Donovan, C., Redaschi, N., Yeh, L.S.: UniProt: The Universal Protein Knowledgebase. Nucleic Acids Research 32, D115–D119 (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Stéphane Jean (1)
  • Ladjel Bellatreche (1)
  • Géraud Fokou (1)
  • Mickaël Baron (1)
  • Selma Khouri (1)

  1. LIAS/ISAE-ENSMA and University of Poitiers, Futuroscope Cedex, France
