
Semantic Support for Recording Laboratory Experimental Metadata: A Study in Food Chemistry

  • Dena Tahvildari
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9088)

Abstract

A fundamental principle of scientific enquiry is to create proper documentation of data and methods during experimental research [1, 5].

Keywords

Food Chemistry · Method Description · Oxalic Acid Solution · Basic Formal Ontology · Metadata Record

1 Introduction and Research Objective

A fundamental principle of scientific enquiry is to create proper documentation of data and methods during experimental research [1, 5]. Providing the context of observations is essential for understanding their meaning and for reproducing the experiment. Proper annotation also increases the likelihood of data being found and re-used by the same or other researchers [2]. Good scientific practice in the laboratory requires that lab reports describe the sequence of experimental activities executed in the lab. Moreover, method descriptions should include detailed information on the materials and equipment, analytical methods, parameter settings, lab conditions, failures and other details that facilitate reproduction of an experiment. This calls for adding descriptions (metadata) to research data and methods [3]. Most researchers consider the task of describing experimental details to be essential, but at the same time see it as time consuming and distracting from the ‘real research’. As a consequence, the documentation is often suboptimal.

Laboratory research is typically recorded in laboratory notebooks; they are indispensable sources for writing laboratory methodological reports. Researchers are comfortable with paper notebooks, which are accepted as authoritative information sources and as legal documents. Paper lab notebooks offer simplicity and flexibility, but besides the risk of loss and deterioration they do not allow the information to be searched, shared and processed computationally [9, 28, 29]. Moreover, they cannot provide tooling for more efficient and effective recording. The availability of computational environments for collecting and analyzing experimental data and the definition of digital formats for data have created persuasive incentives for the transition from paper to electronic lab notes. Powerful computing infrastructures have become a necessity to keep pace with the expanding volume of data and to retain control of the results. However, a survey by Downing et al., 2008 in the chemistry laboratories at Cambridge and Imperial College shows that most researchers make their notes on paper. In addition, they keep data on disparate systems that are linked to specific equipment. Moreover, researchers do not use any standards for writing descriptions of the experiments. They preserve the resulting documents on a variety of computing platforms and systems. In many cases these files are not interpretable by others because of the quality of the descriptions [4]. This leads to data loss and confusion for scientists who need to understand the experimental results and interpret how they were created.

It is evident that current documentation practices in the lab are no longer efficient in the digital era. Digital recording will allow new ways to support this process. In particular, the use of semantic metadata will enable machines to interpret and integrate data generated by different sources (equipment, people, and repositories) in various formats. Our hypothesis is that the availability of vocabularies for annotating the context of a lab experiment can contribute to an efficient and effective experimental documentation process. This line of reasoning is the main motivation for our research. The objective of the research is:

“to explore if and how the documentation task undertaken by scientists could be improved through the use of an ontology-based metadata capture supporting tool in the laboratory.”

The documentation task should be easy and efficient, but at the same time deliver high-quality recordings. There is also a debate, both inside and outside the scientific communities, over the lack of reproducibility of experiments.

In this study, we first identify quality criteria for experimental documentation in the domain, starting with method descriptions found in the literature. We evaluate in detail the reporting of methods in a comprehensive set of laboratory experiments, aiming at descriptions that enable valid reproduction, integration and comparison of research procedures. In our work we focus on Food Chemistry, assuming that the outcomes will be valuable for other domains as well. Second, we define indicators to measure the efficiency of the documentation task as performed in the lab. We develop vocabularies to formally describe the domain knowledge. Finally, given the developed models and defined metrics, we design a prototype tool intended to assist researchers in efficiently producing high-quality lab notes. We will set up an intervention study to evaluate the tool and the underlying hypothesis in the context of a Food Chemistry research group.

2 Related Work

The related work presented in this section concerns different domains within computer science that are relevant to our research: (1) metadata quality, (2) description logic and ontology engineering, and (3) scientific workflow management systems.

1. In the literature, research metadata is defined as “the data record that contains structured information about some resources. It describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource” [5]. In science, creating high-quality metadata for research resources is important. Metadata, if created accurately, can lead to the provision of more accurate methodological reports. Several research initiatives related to scientific metadata quality have been conducted [6, 7, 8, 11, 12, 13, 14]. These efforts approach the subject from diverse perspectives, trying to cover most of its different aspects.

Najjar, Ternier and Duval (2003) performed a statistical analysis on a sample of metadata records from various repositories and evaluated the usage of the standard [9]. Crystal and Greenberg (2005) report on a study that investigated the ability of resource authors to create acceptable-quality metadata in an organizational setting, using manual evaluation by experts [10].

Andy Brass and his research team (Flórez-Vargas et al., 2014) conducted research on the quality of methods reporting in parasitology experiments. They defined a checklist of essential parameters that should be reported in the methodology sections of scientific articles and scored, for each publication, the number of those parameters that were reported. An interesting aspect of their research is that they used bibliometric parameters (impact factor, citation rate and h-index) to look for an association between journal and author status and the quality of method reporting [38]. Their results indicate that the “bibliometric parameters were not correlated with the quality of method reporting” (Spearman’s rank correlation coefficient < −0.5; p > 0.05). They concluded that the quality of methods reporting in experimental parasitology is a source of concern and has not improved over time, despite there being evidence that most of the assessed parameters do influence the results. They proposed a set of parameters to be used as guidelines to improve the quality of the reporting of experimental infection models, as a requirement for comparing datasets.

Finally, some initiatives, such as the Minimum Information About a Microarray Experiment (MIAME) [39] and the Minimum Information About a Proteomics Experiment (MIAPE) [40], have been used by several journals such as the Journal of Proteomics, as a condition for publication.

2. Ontologies have been presented as a possible solution for expressing metadata. They satisfy metadata requirements and are capable of representing the specific semantics of each research domain. In the biomedical domain, the Ontology for Biomedical Investigations (OBI) helps to model the design of investigations, including the protocols, the materials and instruments used, the data generated and the types of analysis performed on them [15, 17]. OBI extends the Basic Formal Ontology (BFO), an upper-level ontology that describes general entities which do not belong to a specific problem domain. Therefore, all OBI classes are subclasses of some BFO class. The ontology has the scope of modeling all biomedical investigations and as such contains ontology terms for aspects such as:
  • Biological material – such as plasma,

  • Instrument – such as DNA microarray and centrifuge,

  • Actions of an experiment and sub-steps of the experiment – such as electrophoresis material separation,

  • Data processing – for example Principal Component Analysis.

Biomedical experimental processes involve numerous sub-processes and experimental materials such as organisms and cell cultures. These experimental materials are represented as subclasses of the BFO class material entity; OBI uses BFO’s material entity as the basis for defining physical elements. To assess the use of OBI for annotation, its developers applied it in an automated functional genomics investigation with the Robot Scientist [31]. The robot requires a complete and precise description of all experimental actions, and this use case demonstrates how OBI was able to provide elements of such a description. The general ontology of scientific experiments, EXPO [16], also intends to formally describe domain-independent knowledge about the planning, actions and analysis of scientific experiments. EXPO formalizes generic concepts of experimental design such as methodology and result representation, and it links SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments. EXPO is expressed in OWL-DL. The ontology has the class expo:Experimental_protocol and describes some of its properties, such as expo:has_applicability, expo:has_goal and expo:has_plan. Its level of granularity makes EXPO unwieldy for use in an operational data management workflow. The last vocabulary relevant to our research is OM (Ontology of units of Measure and related concepts). Rijgersberg et al., 2009, developed OM to facilitate the transparent exchange and processing of quantitative information [18]. OM is expressed in OWL. They have designed applications to test the usefulness of OM and its services: first, a web application that checks the consistency of dimensions and units in formulas; second, a Microsoft Excel add-in that assists in data annotation and unit conversion [19].
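To make the idea concrete, the sketch below shows how a single experimental step could be recorded as a structured, machine-interpretable record using rdflib in Python. The namespaces and term names (LAB, RUN, CentrifugationStep, etc.) are hypothetical placeholders, not actual OBI, EXPO or OM identifiers, and this is not the implementation used in this research.

```python
# A minimal sketch of an ontology-style experiment record, using rdflib.
# All namespaces and term names below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

LAB = Namespace("http://example.org/foodchem#")       # hypothetical domain vocabulary
RUN = Namespace("http://example.org/experiment/42#")  # one experiment record

g = Graph()
g.bind("lab", LAB)
g.bind("run", RUN)

step = RUN.step_03
g.add((step, RDF.type, LAB.CentrifugationStep))           # experimental action
g.add((step, LAB.hasInput, RUN.sample_A))                 # experimental material
g.add((step, LAB.usesInstrument, LAB.Centrifuge))         # instrument
g.add((step, LAB.hasSpeedValue, Literal(4000)))           # parameter value
g.add((step, LAB.hasSpeedUnit, LAB.revolutionsPerMinute)) # unit, in the spirit of OM
g.add((step, RDFS.comment, Literal("Centrifuge sample A at 4000 rpm for 10 min")))

print(g.serialize(format="turtle"))
```

A record of this kind can be queried, shared and integrated with data from other sources, which a free-text lab note cannot offer.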

The existing ontologies, in particular OBI, could be applicable in our approach for developing vocabularies in the food chemistry domain. We intend to design vocabularies by building upon the existing ontologies using OWL standards.

One of the main areas where a description logic-based ontology can be helpful is at development time, i.e. while the researcher is recording the experiment. The idea is that, using ontologies, we can have a hierarchy of the domain knowledge assembled in the system, and a reasoner can identify inconsistencies and incoherence in the descriptions. We can also use the ontology as a mechanism for providing metadata suggestions to researchers (a decision support tool), as sketched below.
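The following plain-Python sketch illustrates the suggestion idea under simplified assumptions: a hand-coded fragment of a class hierarchy with required metadata fields per class stands in for a full OWL ontology and reasoner, and all class and field names are invented for illustration.

```python
# A minimal sketch of ontology-driven metadata suggestions (decision support).
# The class hierarchy and required fields below are illustrative assumptions,
# not an existing food chemistry ontology.
REQUIRED_METADATA = {
    "ExperimentalAction": ["performed_by", "timestamp"],
    "TransferAction": ["volume", "unit", "source", "target"],
    "MeasurementAction": ["instrument", "wavelength", "unit"],
}
SUPERCLASS = {
    "TransferAction": "ExperimentalAction",
    "MeasurementAction": "ExperimentalAction",
}

def required_fields(cls):
    """Collect required metadata fields for a class and all of its superclasses."""
    fields = []
    while cls is not None:
        fields.extend(REQUIRED_METADATA.get(cls, []))
        cls = SUPERCLASS.get(cls)
    return fields

def suggest_missing(record):
    """Return the metadata fields the researcher has not yet filled in."""
    return [f for f in required_fields(record["type"]) if f not in record]

record = {"type": "TransferAction", "volume": 9.0, "unit": "mL"}
print(suggest_missing(record))  # -> ['source', 'target', 'performed_by', 'timestamp']
```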

3. Jeremy Frey is one of the pioneers in the domain of laboratory automation, specifically investigating the use of semantic technologies in laboratory data capture and re-use for chemistry labs [23, 24, 25, 27]. One of the most relevant references in this field is the paper by Hughes et al. (2004) [27]. They developed an innovative human-centered system that captures the process of a chemistry experiment from plan to execution. The system comprises an electronic lab book, which has been successfully trialed in a synthetic organic chemistry laboratory, and a flexible back-end storage system (using RDF technologies). They took a “design-by-analogy” research approach, in close collaboration with chemists, to develop the “MyTea” experiment planner [27]. Similarly, LabTrove is a social network system that facilitates the association of data with the proposed scientific elements at the point of creation (annotation at source), rather than annotating the data with commentary after the experiment has taken place [34]. The LabTrove application was designed to help researchers share their experimental plans, thoughts, observations and achievements with the wider online community in a semantically rich and extensible manner. Using the application, scientists no longer have to print out data results to insert into conventional lab books; instead, results are logically associated with the experiment and therefore become accessible as desired. The last knowledge management application related to our work is Tiffany [30], which is specifically designed for laboratory studies. The Tiffany model is a refinement of the W3C PROV-O model for provenance, combining the ability to trace back the workflow with extra information that is useful for researchers, such as the type of activity and the research question being investigated. Tiffany is used at Wageningen UR to help researchers in the food domain give structure to their research workflow and to facilitate:
  • Good archiving.

  • Re-use.

  • Knowledge transfer.

  • Serendipity.

LabTrove and Tiffany are the main workflow management systems that we will study in detail to check their applicability to our approach. Finally, an inspiring reference that could help us understand laboratory life, and which is considered a reference on the behavior of laboratory scientists, is the work of the French philosopher Bruno Latour. He observed laboratory scientists over a period of two years and described the process that scientists undertake to conduct an experiment and develop scientific facts in the laboratory [37]. This source could give us insight into the behavioral features of laboratory scientists, which could be valuable knowledge in the application design phase.

3 Research Problem and Research Questions

The problem statement that motivates our research is:

“Inefficiencies in capturing the context of the experimental procedures by laboratory scientists within the physical lab lead to poor documentation of the research process and ultimately result in the provision of inadequate methodological reports.”

The problem is rooted in several factors. First, there are factors such as motivation and gratification for describing experimental methods in detail. Although documentation is an integral part of scientific research, it is a labor-intensive and cumbersome activity for scientists. Researchers are reluctant to allocate time to record, sufficiently annotate and share the context of observations in the lab. Second, in many cases researchers simply do not know what kind of information is valuable to record; for example, are room lighting and room temperature important information to be recorded? There is no single approach to this; it varies from one experiment to the other and depends on the intended use of the recordings. For example, an SOP (Standard Operating Procedure) will be used as a well-defined detailed description for potentially many (unknown) users, whereas a simple ‘Friday afternoon trial’ will only have to be understood by a small number of researchers. Another source of additional effort is the fact that many laboratory instruments are not yet integrated into a digitized workflow of a lab researcher, especially in academic research institutes and universities. Therefore, researchers often have to enter and transfer data and the associated descriptions more than once – from equipment measurements to their notebook, from their notebooks into digital formats and files, from personal computers to institutional repositories, etc.

The problem has a costly impact on (1) the reproducibility and (2) the traceability of laboratory data and methods.

We propose the following research question to address the above problem.

“Do semantic technologies and their applications contribute to the efficiency of the documentation task and improve the quality of experimental metadata provided by the laboratory researchers in the domain of Food Chemistry?”

To answer the main research question, we propose the following sub research questions:
  • RQ1 – What are quality criteria for experimental methodological reports?

  • RQ2 – What are the variables that influence the reproducibility of laboratory experimental procedures?

  • RQ3 – What are measurable indicators for the efficiency of the documentation task in the lab?

  • RQ4 – Which ontologies are required to annotate the context of experimental methods in the domain of food chemistry?

  • RQ5 – Which ontology-based supporting tools can help laboratory researchers in annotating their experimental data and methods in an efficient and effective way?

4 Scope of the Research

In this research we focus only on improving the documentation task within the environment of the physical laboratory, or what domain scientists call the “wet lab”.

We are aware that we need to make a clear choice regarding the users of our applications, because it affects our choices when conducting experiments. We assume that the primary users of our software are human researchers (the Robot Scientist is out of the scope of this research) who work in an academic institute of food sciences. We assume that they have the background knowledge and the expertise of working in the lab. We specifically target doctoral students, technicians and senior researchers from the food chemistry domain.

Stimulating the “exact reproducibility of findings” is not the main focus of our research. However, we argue that improving the quality of methodological reporting could contribute to acceptable levels of “reproducibility of the experimental procedures” in lab experiments.

Finally, in creating metadata quality criteria, we are fully aware that it is not feasible to capture all the information from the context of the lab, since a large amount of the knowledge in the lab is tacit knowledge. Within this context, the elicitation and measurement of tacit knowledge in laboratory environments are concepts that we take into consideration in our approach.

5 Research Methodology

For each of the research questions, we explore the literature to find theories, techniques and best practices developed for the same or other domains. To define quality criteria for method descriptions we consult the literature and conduct interviews with scientists in the field, initially open but gradually more specific. We use computational text analysis tools to detect the characteristics of method descriptions in food chemistry publications. The combined findings from these investigations help us to propose quality indicators. Given the theoretical framework and the development of a coding schema (ontology), we refine our propositions through structured interviews with a larger number of scientists. In parallel, we use text analysis techniques to analyze the content of method sections in articles published in the domain. We are interested in finding the most frequent and co-occurring words in the corpus and in classifying terms into several topics. This phase is the initial step in understanding the domain knowledge that is used when describing methods and in defining supporting vocabularies. The concepts and vocabularies will be used in tools at a later stage of the research.

Our present dataset for text analysis is a corpus containing 241 method sections from 9 different scientific journals in the food chemistry domain, published between 2000 and 2014. We used the Python programming language for pre-processing the sections and performed our analysis in R (using RStudio). Next, we consult the domain experts in food chemistry. We ask them (1) to give the general quality criteria they use for judging method descriptions, depending on the intended use, (2) to comment on the quality of a number of specific method descriptions given to them, and (3) to comment on the results from the text analysis (term frequency analysis and topic models) as representations of method descriptions in general. Based on these results, we aim to develop quality criteria, which will then be evaluated through an experiment with scientists. An option here is to select a number of method sections that score either very low or very high on our quality indicators, and have the researchers classify these sections as well.
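As an illustration of this pipeline, the sketch below computes term frequencies and a small topic model over method-section texts with scikit-learn. It is only a Python analogue of the analysis (the actual analysis was performed in R), and the example texts and the number of topics are illustrative choices.

```python
# A minimal sketch, assuming a list of method-section strings loaded from the corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

method_sections = [
    "Transfer approximately 9 mL oxalic acid solution to a dry cuvette ...",
    "Samples were centrifuged at 4000 rpm for 10 min and the supernatant ...",
    # ... in the study: 241 method sections from 9 food chemistry journals
]

# Term frequencies over the corpus (English stop words removed)
vectorizer = CountVectorizer(stop_words="english", lowercase=True)
counts = vectorizer.fit_transform(method_sections)
terms = vectorizer.get_feature_names_out()
totals = counts.sum(axis=0).A1
top = sorted(zip(terms, totals), key=lambda t: t[1], reverse=True)[:20]
print("most frequent terms:", top)

# A small topic model; the number of topics is an illustrative choice
lda = LatentDirichletAllocation(n_components=5, random_state=0)
lda.fit(counts)
for i, weights in enumerate(lda.components_):
    top_terms = [terms[j] for j in weights.argsort()[-8:][::-1]]
    print(f"topic {i}: {' '.join(top_terms)}")
```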

For RQ3 we interview and observe researchers in order to identify efficiency criteria for the documentation task, for example the time needed to associate metadata with experimental data.

For RQ4 we adopt the NeOn methodology to develop vocabularies; a review of the NeOn methodology is reported in the work by Garcia et al., 2011 [26, 32]. The steps are:
  1. Preparation,

  2. Conceptualization,

  3. Knowledge acquisition and domain analysis,

  4. Semantic analysis,

  5. Building the ontology and validation,

  6. Evaluation.

We use the ontology development tool ROC+ to allow food chemists to set up the initial vocabularies and to verify these by checking automatically generated annotations of the selected method sections [33].
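One simple way to generate such candidate annotations, assuming a small proto-vocabulary of the kind domain experts could build, is string matching against method-section text. The sketch below is not ROC+ itself; the vocabulary entries and term identifiers are invented for illustration.

```python
# A minimal sketch of generating candidate annotations for expert verification.
# The vocabulary and the "lab:" term identifiers are illustrative placeholders.
VOCABULARY = {
    "oxalic acid": "lab:OxalicAcid",
    "cuvette": "lab:Cuvette",
    "spectrophotometer": "lab:Spectrophotometer",
    "centrifuge": "lab:Centrifuge",
    "transfer": "lab:TransferAction",
}

def annotate(sentence):
    """Return (surface form, vocabulary term) pairs found in the sentence."""
    text = sentence.lower()
    return [(term, concept) for term, concept in VOCABULARY.items() if term in text]

print(annotate("Transfer approximately 9 mL oxalic acid solution into a dry cuvette."))
# -> [('oxalic acid', 'lab:OxalicAcid'), ('cuvette', 'lab:Cuvette'), ('transfer', 'lab:TransferAction')]
```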

Finally, for RQ5, we design and build an ontology-based application that aims to support experimental scientists in creating high-quality method descriptions, using software engineering techniques. To accomplish this goal we first need to identify the applications commonly used in the workflow of scientists. In this way we can design an application that can be integrated into the present way of working and that contributes to efficiency. By separating aspects that are specific to the food chemistry domain from general design decisions, we aim to gain insights that are applicable to experimental science in general.

6 Preliminary Findings (RQ1)

Through a bottom-up approach we analyzed our data to learn about the domain, the structure of the method descriptions, the terminology used, and the nature of experiments in food chemistry. For this, we first manually reviewed the method descriptions of a small number of papers to get a basic understanding of the field. We focused our analysis on identifying necessary and sufficient information for reporting methods. Moreover, we tried to define categories to classify the knowledge into concepts such as equipment, reagents, and actions. This allows us to compare our categorization with the terminology used by researchers. We used the NVivo software for qualitative data analysis because of the unstructured nature of our dataset. In addition to manual analysis, we used Natural Language Processing techniques such as term frequency analysis and topic modeling in R to learn more about underlying meanings in the text. From our inspections, we detected workflow aspects in the method descriptions. The sequence of actions was implicit and depended on the requirements of the respective journals. However, we could find commonalities in the structure of these descriptions, as most of them had an input-output structure. From the manual analysis, we identified two main elements that are visible in these workflows: (1) experimental actions and (2) experimental objects.

Actions in the descriptions were usually expressed by verbs; most experimental actions were described implicitly, and accurate information (metadata) for implementing the action was not always available. For instance, phrases such as “Transfer approximately 9 mL oxalic acid solution” or “Use dry cuvettes to mix and read on a spectrophometer at 440 nm against CHM solvent” repeatedly occurred in our dataset. Domain experts are usually required to interpret the information in such descriptions. The results from the term frequency analysis and topic modeling are available at https://gist.github.com/denatahvildari. We will discuss these results with the domain scientists in order to analyze them further.
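To illustrate how such implicit actions and objects can be made explicit automatically, the sketch below POS-tags one of the example sentences with spaCy and separates candidate action verbs from candidate experimental objects. This is an illustrative helper, not part of our current pipeline, and it assumes the en_core_web_sm model is installed; the outputs in the comments are model-dependent.

```python
# A minimal sketch of extracting candidate actions and objects from a method sentence.
# Requires the small English model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

sentence = "Transfer approximately 9 mL oxalic acid solution into a dry cuvette."
doc = nlp(sentence)

actions = [tok.lemma_ for tok in doc if tok.pos_ == "VERB"]       # candidate experimental actions
objects = [chunk.text for chunk in doc.noun_chunks]               # candidate experimental objects
quantities = [ent.text for ent in doc.ents if ent.label_ == "QUANTITY"]

print("actions:", actions)       # e.g. ['transfer'] (model-dependent)
print("objects:", objects)       # e.g. ['9 mL oxalic acid solution', 'a dry cuvette']
print("quantities:", quantities)
```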

7 Research Evaluation Plan

The overall evaluation of the research will be accomplished by using the ontology in applications and assessing the results in real-life experiments (application-based evaluation). We will set up intervention studies with scientists to assess how well the ontology-based tools improve the efficiency of the documentation task and the quality of the resulting methodological reports. To evaluate the ontologies as such, we will follow the practice introduced by Gomez-Perez et al. [22], which suggests the following assessment criteria (this method is also used and reported in [26]):
  1. Consistency,

  2. Completeness,

  3. Conciseness.

Furthermore, to determine the usefulness of the representations, the important question is whether the representations (ontologies) are sufficient. For example, we can evaluate the sufficiency of the ontologies through an experiment in which we ask researchers to create a methodology description using the representations. Afterwards, we ask another researcher to reproduce the experimental process using the previous protocol and to create another description. If we can show that the two descriptions of the same experiment are equivalent, the validity of the ontologies can be assessed.

The validation of the proposed tooling lies in whether other researchers can use the information in high-quality descriptions to replicate an experimental setup. To measure this, we aim to set up an experiment. For example, we take two groups of researchers. We ask the researchers in one group to conduct an experiment and write laboratory method reports using the metadata indications that we previously identified. Next, we ask the researchers from the other group to use these information sources and try to set up the experiment. We then measure and discuss the success of the reproduction task based on the expertise of the researchers and their level of accomplishment in completing the reproduction.

8 Discussion

In this research we aim to design a tool that helps researchers to record their experimental metadata in an efficient way. We argue that by designing ontology-based metadata recording applications for the context of the lab, we contribute to the development of electronic laboratory notebooks and in turn promote the provision of high-quality methodological reporting. Ultimately, with sufficient information about the experimental procedure available, reproducibility could be positively influenced. However, inspired by the research of Vasilevsky et al. 2013, we agree that the “identifiability of the research resources” [36] in a specific research domain is a prime necessity; otherwise, successful reproduction cannot be achieved. They conducted a study to investigate the “identifiability” of resources in the biomedical research domain from publications. Based on their results, 54 % of the research resources in this domain are not uniquely identifiable in research publications. However, they did not check whether adding identifiability was enough to achieve reproducibility. We assume that, through our approach, we will find other variables in addition to resource identifiability that could affect scientific reproducibility, such as “the experimenter’s awareness of the domain metadata”.

Another point for discussion in our research comes from the observation by Drummond, 2009 [35]. He argues that reproducibility is different from replicability; his claim is: “reproducibility requires changes, while replicability avoids them” [35]. He further argues that scientific replication is not worth the great deal of extra work incurred by the researcher. His article makes the valid point that, in any case, the full replication of a previous experiment is not achievable, since the experiment is being carried out by another researcher, in another laboratory, with different equipment. He concludes that reproducibility covers a wide range and that replication falls at one end of this range. We argue that replicability is the exact repetition of an experiment to obtain the same results, while reproducibility is the repetition of an experiment with small adjustments and modifications, e.g. changes that will unavoidably occur when undertaking the same experiment in different laboratories. Our main interest in this research is to identify the variables that stimulate the reproduction of the “lab experimental processes”, not the “results”. Our main statement is that if results are replicable but the experimental process is not reproducible, they may be of little value, because they are likely to be tied to the precise conditions used in an experiment (for example, the use of a scarce sample or of equipment that only certain laboratories are authorized to work with). The information reported in the materials and methods section of an article plays a fundamental role in achieving this aim.

We think that even the weakest version of reproduction has some value. For example, among laboratory researchers the term “technical reproducibility” is very well known. The term indicates that every laboratory experiment should be carried out in duplicate to check the validity of the procedure: researchers calculate the coefficient of variation of the duplicated experiments, and if this ratio is greater than 10 % they need to redo the experiment. This in fact emphasizes the value of reproducibility and comparability, and of their prerequisites. Despite the extra effort involved in the documentation task, we claim that if researchers are aware of the essential metadata of their domain, and if they are equipped with efficient tools that support them at development time, at least this “technical reproducibility” is achievable.
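As a small worked example of this check, the snippet below computes the coefficient of variation of two duplicate readings and applies the 10 % threshold mentioned above. The numbers and the choice of the sample standard deviation are illustrative assumptions.

```python
# A minimal worked example of the "technical reproducibility" check described above.
import statistics

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

duplicates = [0.412, 0.398]   # e.g. duplicate absorbance readings at 440 nm
cv = coefficient_of_variation(duplicates)
print(f"CV = {cv:.1f} %")
if cv > 10:
    print("duplicates disagree too much: repeat the experiment")
else:
    print("within the 10 % threshold: procedure considered technically reproducible")
```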

We know that achieving general agreement on standards, particularly metadata vocabularies, is a challenge in most disciplines. We also think that a solution for defining lab metadata is much more than just a technical challenge. Motivating scientists to use terms from controlled vocabularies by providing tools that use those terms is not straightforward.


Acknowledgement

This PhD research is a joint project between the Computer Science Department of Vrije University of Amsterdam and the Wageningen UR. The author acknowledges the help of Prof. Jan Top in shaping and writing the manuscript and appreciates the insights from Prof. Guus Schreiber and Prof. Bijan Parsia.

References

  1. Baranova, A., Campagna, S.R., Chen, R., et al.: Toward more transparent and reproducible omics studies through a common metadata checklist and data publications. OMICS: A J. Integr. Biol. 18(1), 10–14 (2014)
  2. Agosti, M., Ferro, N., Frommholz, I., Thiel, U.: Annotations in digital libraries and collaboratories – facets, models and usage. In: Heery, R., Lyon, L. (eds.) ECDL 2004. LNCS, vol. 3232, pp. 244–255. Springer, Heidelberg (2004)
  3. World Health Organization: Handbook: good laboratory practice (GLP): quality practices for regulated non-clinical research and development. World Health Organization (2010)
  4. Downing, J., Murray-Rust, P., Tonge, A.P., Morgan, P., Rzepa, H., Cotterill, F., Day, N., Harvey, M.: SPECTRa: the deposition and validation of primary chemistry research data in digital repositories. J. Chem. Inf. Model. 48(8), 1571–1581 (2008)
  5. Jones, M.B., Berkley, C., Bojilova, J., Schildhauer, M.: Managing scientific metadata. IEEE Internet Comput. 5(5), 59–68 (2001)
  6. Moulaison, H., Felicity, D.: Metadata quality in digital repositories (2014)
  7. Park, J.-R.: Metadata quality in digital repositories: a survey of the current state of the art. Cataloging Classif. Q. 47(3–4), 213–228 (2009)
  8. Robertson, R.J.: Metadata quality: implications for library and information science professionals. Libr. Rev. 54(5), 295–300 (2005)
  9. Najjar, J., Ternier, S., Duval, E.: The actual use of metadata in ARIADNE: an empirical analysis. In: Proceedings of the 3rd Annual ARIADNE Conference, pp. 1–6 (2003)
  10. Crystal, A., Greenberg, J.: Usability of a metadata creation application for resource authors. Libr. Inf. Sci. Res. 27(2), 177–189 (2005)
  11. Mitchell, E.T.: Metadata literacy: an analysis of metadata awareness in college students. Ph.D. dissertation, University of North Carolina at Chapel Hill (2009)
  12. Dushay, N., Hillmann, D.I.: Analyzing metadata for effective use and re-use. In: DCMI Metadata Conference and Workshop, Seattle, USA (2003)
  13. Moen, W.E., Stewart, E.L., McClure, C.L.: The role of content analysis in evaluating metadata for the U.S. government information locator service (GILS): results from an exploratory study (1997). http://www.unt.edu/wmoen/publications/GILSMDContentAnalysis.htm. Accessed March 2013
  14. Hughes, G., Mills, H., De Roure, D., Frey, J.G., Moreau, L., Smith, G., Zaluska, E., et al.: The semantic smart laboratory: a system for supporting the chemical eScientist. Org. Biomol. Chem. 2(22), 3284–3293 (2004)
  15. Brinkman, R., Courtot, M., Derom, D., Fostel, J.M., He, Y., Lord, P.W., Malone, J., et al.: Modeling biomedical experimental processes with OBI. J. Biomed. Semant. 1(S-1), S7 (2010)
  16. Soldatova, L.N., et al.: An ontology of scientific experiments. J. R. Soc. Interface 3(11), 795–803 (2006)
  17. Courtot, M., et al.: The OWL of biomedical investigations. In: OWLED Workshop at the International Semantic Web Conference (ISWC), Karlsruhe, Germany (2008)
  18. Rijgersberg, H., Top, J., Meinders, M.: Semantic support for quantitative research processes. IEEE Intell. Syst. 24(1), 37–46 (2009)
  19. Rijgersberg, H., van Assem, M., Top, J.: Ontology of units of measure and related concepts. Semant. Web 4(1), 3–13 (2013)
  20. Suárez-Figueroa, M.C.: Ontology Engineering in a Networked World, p. 444. Springer, Berlin (2012)
  21. Garcia-Castro, A.: Developing Ontologies in the Biological Domain. Ph.D. thesis, Institute for Molecular Bioscience, University of Queensland, Queensland (2007)
  22. Gomez-Perez, A.: Evaluation and assessment of knowledge sharing technology. In: Mars, N.J.I. (ed.) Towards Very Large Knowledge Bases: Knowledge Building & Knowledge Sharing, pp. 289–296. IOS Press, Amsterdam (1995)
  23. Coles, S.J., Frey, J.G., Bird, C.L., Whitby, R.J., Day, A.E.: First steps towards semantic descriptions of electronic laboratory notebook records. J. Cheminform. 5(1), 52 (2013)
  24. Frey, J.G.: Curation of laboratory experimental data as part of the overall data lifecycle. Int. J. Digit. Curation 3(1), 44–62 (2008)
  25. Frey, J.G.: The value of the semantic web in the laboratory. Drug Discov. Today 14(11), 552–561 (2009)
  26. Garcia, A., Giraldo, O.: Annotating experimental records using ontologies. In: International Conference on Biomedical Ontology, Buffalo, NY, USA (2011)
  27. Hughes, G., Mills, H., Smith, G., Frey, J., et al.: Making tea: iterative design through analogy. In: Proceedings of the 5th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, pp. 49–58. ACM (2004)
  28. Klokmose, C.N., Zander, P.O.: Rethinking laboratory notebooks (2010)
  29. Nussbeck, S.Y., Weil, P., Menzel, J., Marzec, B., Lorberg, K., Schwappach, B.: The laboratory notebook in the 21st century. EMBO Reports (2014)
  30. Top, J., Broekstra, J.: Tiffany: sharing and managing knowledge in food science. Keynote at ISMICK, Brazil (2008)
  31. Soldatova, L., et al.: An ontology for a Robot Scientist. Bioinformatics (Special Issue for ISMB) 22(14), e471 (2006)
  32. Suárez-Figueroa, M.C.: NeOn Methodology for building ontology networks: specification, scheduling and reuse. Ph.D. dissertation (2010)
  33. Koenderink, N.J.J.P., van Assem, M., Hulzebos, J., Broekstra, J., Top, J.L.: ROC: a method for proto-ontology construction by domain experts. In: Domingue, J., Anutariya, C. (eds.) ASWC 2008. LNCS, vol. 5367, pp. 152–166. Springer, Heidelberg (2008)
  34. Milsted, A.J., et al.: LabTrove: a lightweight, web based, laboratory “blog” as a route towards a marked up record of work in a bioscience research laboratory. PLoS ONE 8(7), e67460 (2013)
  35. Drummond, C.: Replicability is not reproducibility: nor is it good science (2009)
  36. Vasilevsky, N.A., Brush, M.H., Paddock, H., Ponting, L., Tripathy, S.J., LaRocca, G.M., Haendel, M.A.: On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ 1, e148 (2013)
  37. Latour, B., Woolgar, S.: Laboratory Life: The Construction of Scientific Facts. Princeton University Press, Princeton (2013)
  38. Flórez-Vargas, O., Bramhall, M., Noyes, H., Cruickshank, S., Stevens, R., Brass, A.: The quality of methods reporting in parasitology experiments. PLoS ONE 9(7), e101131 (2014)
  39. Brazma, A., Hingamp, P., Quackenbush, J., Sherlock, G., Spellman, P., Stoeckert, C., Vingron, M.: Minimum information about a microarray experiment (MIAME)—toward standards for microarray data. Nat. Genet. 29(4), 365–371 (2001)
  40. Taylor, C.F., Paton, N.W., Lilley, K.S., Binz, P.A., Julian, R.K., Jones, A.R., Hermjakob, H.: The minimum information about a proteomics experiment (MIAPE). Nat. Biotechnol. 25(8), 887–893 (2007)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Business Web and Media Group, Vrije University of Amsterdam, Amsterdam, The Netherlands
