Methodology of aiQSAR: a group-specific approach to QSAR modelling
Several QSAR methodology developments have shown promise in recent years: consensus approaches that combine several predictions into a final output, new and more powerful machine learning algorithms, and the streamlining, standardization and automation of the various steps of QSAR modelling. One approach that remains under-explored is the runtime generation of local models specific to individual compounds. This approach was most likely held back by its computational requirements, but with current increases in processing power and the widespread availability of cluster-computing infrastructure, this limitation has become far less severe.
We propose a new QSAR methodology, aiQSAR, whose aim is to generate endpoint predictions directly from the input dataset by building an array of local models, generated at runtime and specific to each compound in the dataset. The local group of each compound is selected on the basis of fingerprint similarities, and the final prediction is calculated by integrating the results of several autonomous mathematical models. The method is applicable to regression, binary classification and multi-class classification, and was tested on one dataset for each endpoint type: bioconcentration factor (BCF) for regression, the Ames test for binary classification and the Environmental Protection Agency (EPA) acute rat oral toxicity ranking for multi-class classification. As part of the method, the applicability domain of each prediction is assessed through an applicability domain measure (ADM), calculated from the fingerprint similarities within each local group of compounds.
We outline the methodology for a new QSAR-based predictive tool whose advantages are automation, a group-specific approach to modelling and simplicity of execution. Our next step is to develop this method into a stand-alone software tool. We hope that eventual adoption of the tool will make QSAR modelling more accessible and transparent. Our methodology could be used as an initial modelling step, predicting new compounds simply by loading the training dataset as input. Predictions could then be further evaluated and refined either by other tools or through optimization of aiQSAR parameters.
Keywords: QSAR; local models; machine learning; applicability domain measure; BCF; Ames test; acute oral toxicity
Abbreviations: ADM: applicability domain measure; EPA: Environmental Protection Agency; FDA: Food and Drug Administration; κ: Cohen’s kappa coefficient; MAE: mean absolute error; NICEATM: NTP Interagency Center for the Evaluation of Alternative Toxicological Methods; QSAR: quantitative structure–activity relationship
The use of quantitative structure–activity relationship (QSAR) methods has expanded significantly in recent decades [1, 2]. Consequently, and due to the success of early QSAR models, there is growing interest in these methods from the regulatory perspective, which in turn influences further QSAR development [3, 4]. Among the driving factors are the very high costs, both financial and in time, of standard in vivo and in vitro regulatory tests, as well as ethical concerns over the use of animals in in vivo tests.
Recent developments in QSAR methods show several characteristics. The first is the move towards an integrated approach to modelling: the final output of a model is a consensus of several predictions, each obtained by a distinct QSAR approach [8, 9, 10]. This is further evidenced by the structure of community-wide modelling efforts, such as the ongoing initiative organized by the acute toxicity workgroup (ATWG) of the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), which shifted the paradigm from the common practice of selecting a single best method to an integrated approach that combines, with different weights, the predictions of all models developed within the initiative.
The second characteristic is streamlining and automation of the workflow. The same shift can be observed in various fields emerging from big-data science, such as bioinformatics. Automation and standardization tools are growing in number, aiming to streamline the most frequently used steps in QSAR development. These range from general tools for model development [13, 14] to procedures that automatically manipulate datasets and, consequently, tools that simplify data conversion and interaction between the platforms used in different steps. A welcome outcome of these streamlining efforts is greater transparency in QSAR model development, especially in terms of training set preparation and methods for performance validation.
The last characteristic is the shift from simple, linear QSAR methods towards more sophisticated machine learning algorithms, which have been shown to perform significantly better [19, 20]. However, this comes at a cost. Linear regression offered a very straightforward way of interpreting the model, since it was immediately obvious which descriptors contributed most to the prediction. For most machine learning algorithms, a mechanistic interpretation is very hard, if not impossible, to obtain. A similar issue arises in many fields of computer science, where the analogous “black box” behavior of these algorithms makes backtracking the mathematical procedure, and therefore interpreting the results, unfeasible.
Local, compound-specific modelling is an established though under-explored methodology; examples include the Food and Drug Administration (FDA) method in the T.E.S.T. software and the lazar framework [24, 25]. Our aim is to incorporate these developments into a new QSAR-based tool: aiQSAR. The tool should be automated, taking as input a dataset of training compounds, regardless of the endpoint, and predicting values for new, unknown compounds. As the mathematical procedure, we employ an array of machine learning algorithms and integrate their predictions into the final output. The focus of the tool is on building distinct, local-group-based models, specific to each compound in the dataset and generated at runtime. Our models are therefore built from scratch for each compound, used for a single prediction, and the procedure is repeated from the beginning for the next compound.
Step 1–Calculating descriptors. 3839 descriptors from the Dragon 7 software are calculated for all TS and ES compounds; all available 1D and 2D descriptors are selected. Alternatively, a numerical matrix of descriptors (calculated for all TS and ES compounds) can be imported from another source.
Step 2–Calculating fingerprints. Fingerprints are calculated for all TS and ES compounds with the R package “rcdk”. Two types of fingerprints, “pubchem” and “extended”, are obtained with the “get.fingerprint” function. PubChem fingerprints are structural keys defined by the PubChem database. Extended fingerprints are obtained through a published fingerprinting procedure in which paths up to six chemical bonds deep are considered and the resulting sequence is folded to a length of 1024 bits.
The following steps are carried out for each compound (referred to as “the target compound”) individually, for the training and the evaluation set:
Step 3–Calculating fingerprint similarities. The Tanimoto distance is used as the measure of fingerprint similarity. The distance between the target compound and all TS compounds is calculated with the R package “fingerprint”.
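As an illustration of step 3, here is a minimal Python sketch of the Tanimoto similarity and distance on binary fingerprints. The actual workflow uses the R package “fingerprint”; representing a fingerprint as the set of its set-bit indices is our own simplification.

```python
def tanimoto_similarity(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two binary fingerprints,
    represented here as sets of the indices of their set bits."""
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def tanimoto_distance(fp_a, fp_b):
    # Complement of the similarity; smaller means more similar.
    return 1.0 - tanimoto_similarity(fp_a, fp_b)
```

For example, fingerprints {1, 2, 3} and {2, 3, 4} share two of four total bits, giving a similarity of 0.5 and a distance of 0.5.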
Step 4a–Training set. For a target compound in the TS, the local group of similar compounds is selected from the remainder of the TS after the target compound is removed.
Step 4b–Evaluation set. For a target compound in the ES, the local group of similar compounds is selected from the whole TS.
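The selection in steps 4a and 4b can be sketched as ranking candidates by Tanimoto distance and keeping the nearest ones. This is an illustrative Python version; the group size `k` is a hypothetical parameter, as the text does not fix its value here.

```python
def tanimoto_distance(fp_a, fp_b):
    # 1 - |A ∩ B| / |A ∪ B| on sets of set-bit indices.
    a, b = set(fp_a), set(fp_b)
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 0.0)

def local_group(target_fp, candidate_fps, k=25):
    """Return the indices of the k candidates closest to the target
    by Tanimoto distance (the local group of similar compounds)."""
    ranked = sorted(range(len(candidate_fps)),
                    key=lambda i: tanimoto_distance(target_fp, candidate_fps[i]))
    return ranked[:k]
```

For a TS target (step 4a), the target itself would be removed from `candidate_fps` before calling this; for an ES target (step 4b), the whole TS is passed.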
Step 5–Filtering descriptors. All descriptors with a missing value for any compound in the local group, or for the target compound, are discarded. Zero-variance and near-zero-variance descriptors are then removed with the “nearZeroVar” function from the R package “caret”. Note that this procedure is carried out independently for each target compound, so different descriptors are likely to be selected each time.
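A simplified Python re-implementation of this filtering step is sketched below. It mirrors the spirit of caret’s `nearZeroVar` (default thresholds freqCut = 95/5 and uniqueCut = 10% are assumed here); it is not caret itself.

```python
from collections import Counter

def filter_descriptors(matrix, names, freq_cut=95 / 5, unique_cut=0.10):
    """Keep descriptor names whose columns have no missing values and
    are neither zero-variance nor near-zero-variance. `matrix` is a
    list of rows (compounds); `names` labels the columns (descriptors)."""
    kept = []
    n = len(matrix)
    for j, name in enumerate(names):
        column = [row[j] for row in matrix]
        if any(value is None for value in column):
            continue  # missing value for some compound in the local group
        counts = Counter(column).most_common()
        if len(counts) < 2:
            continue  # zero variance: a single distinct value
        freq_ratio = counts[0][1] / counts[1][1]
        unique_fraction = len(counts) / n
        if freq_ratio > freq_cut and unique_fraction < unique_cut:
            continue  # near-zero variance
        kept.append(name)
    return kept
```

Because the filter runs on each target compound’s local group separately, the surviving descriptor set changes from compound to compound.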
Step 6–Predicting the target compound. Several mathematical models are built from the local group, and each is used to predict the endpoint value of the target compound. This step uses the R package “caret”, with different methods depending on the endpoint type. The specific methods were selected based on their performance in an independent evaluation on a large collection of datasets (data not shown). The names given here correspond to the respective methods of the “caret” training function.
Step 6a–Regression. For regression endpoints, the following methods are used: glmboost (Boosted Generalized Linear Model), gaussprPoly (Gaussian Process with Polynomial Kernel), glmnet (Lasso and Elastic-Net Regularized Generalized Linear Models), rf (Random Forest) and extraTrees (Random Forest by Randomization).
Step 6b–Binary classification. For binary endpoints, the following methods are used: glmboost, rf, extraTrees, LogitBoost (Boosted Logistic Regression), svmLinearWeights (Linear Support Vector Machines with Class Weights), LMT (Logistic Model Trees) and sdwd (Sparse Distance Weighted Discrimination).
Step 6c–Multi-class classification. For multi-class endpoints, the following methods are used: rf, extraTrees, LogitBoost, LMT and sdwd.
Step 7–Calculating the consensus value. Predictions from all methods are combined into a single output value, with different procedures depending on the endpoint type.
Step 7a–Regression. Predictions from any of the five individual models that are more than 10% above the overall highest TS value, or more than 10% below the lowest TS value, are discarded. The final prediction is the mean of the remaining individual predictions.
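The trim-then-average rule of step 7a can be sketched as follows. The handling of negative range limits (via absolute values) and the fallback when every prediction is discarded are our assumptions, as the text does not specify them.

```python
def regression_consensus(predictions, ts_min, ts_max, margin=0.10):
    """Discard any prediction more than `margin` above the highest TS
    value or below the lowest TS value, then average what remains."""
    upper = ts_max + margin * abs(ts_max)
    lower = ts_min - margin * abs(ts_min)
    kept = [p for p in predictions if lower <= p <= upper] or list(predictions)
    return sum(kept) / len(kept)
```

With the BCF range from the Results (−1.70 to 6.06), a stray prediction of 100 from one model would be discarded and the mean taken over the remaining four.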
Step 7b–Binary classification. The final prediction is the majority vote of the seven individual models.
Step 7c–Multi-class classification. The final prediction is the most frequently predicted class among the five individual models. In case of a tie, the tied class least represented in the TS is selected.
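The voting rule of steps 7b and 7c, including the multi-class tie-break by the least-represented TS class, can be sketched in Python as:

```python
from collections import Counter

def classification_consensus(predictions, ts_class_counts):
    """Majority vote over the individual model predictions. Ties are
    broken in favour of the tied class least represented in the TS,
    as described in the text."""
    votes = Counter(predictions)
    top = max(votes.values())
    tied = [cls for cls, v in votes.items() if v == top]
    if len(tied) == 1:
        return tied[0]
    return min(tied, key=lambda cls: ts_class_counts[cls])
```

For a binary endpoint with an odd number of models (seven), ties cannot occur and the function reduces to a plain majority vote.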
Results and discussion
The workflow was tested on a number of datasets. Three examples will be discussed, one for each endpoint type (regression, binary classification and multi-class classification).
Bioconcentration factor dataset statistics and performance measures (RMSE: root-mean-square error; MAE: mean absolute error). Endpoint values range from −1.70 to 6.06.
Ames mutagenicity test—dataset statistics, confusion matrix and performance measures
The Environmental Protection Agency (EPA) ranking for acute oral toxicity in rats was the endpoint tested for multi-class classification. This dataset was analyzed as part of the acute oral systemic toxicity modelling initiative launched by the NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The objective of this initiative was predictive modelling of five endpoints related to acute toxicity, measured as rat oral LD50: very toxic (binary classification), nontoxic (binary classification), LD50 point estimates (regression), EPA ranking (multi-class classification) and GHS ranking (multi-class classification). The EPA hazard classification splits chemicals into four categories: category I (LD50 ≤ 50 mg/kg) is the highest toxicity category; category II (moderately toxic) covers 50 < LD50 ≤ 500 mg/kg; category III (slightly toxic) covers 500 < LD50 ≤ 5000 mg/kg; and category IV covers safe chemicals (LD50 > 5000 mg/kg).
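The EPA cut-offs quoted above map directly onto a small lookup, sketched here in Python:

```python
def epa_category(ld50_mg_per_kg):
    """Map a rat oral LD50 (mg/kg) to the EPA hazard category using
    the cut-offs quoted in the text."""
    if ld50_mg_per_kg <= 50:
        return "I"    # highest toxicity
    if ld50_mg_per_kg <= 500:
        return "II"   # moderately toxic
    if ld50_mg_per_kg <= 5000:
        return "III"  # slightly toxic
    return "IV"       # safe chemicals
```

This is the labelling scheme the multi-class models are trained to reproduce from structure alone, without access to the measured LD50.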
First, the training set of 8890 compounds (Additional file 4) was released to the community for model development on the EPA toxicity ranking endpoint. Then the test set of 48,138 compounds was released, and predictions for these compounds were requested for each of the five endpoints. Finally, the evaluation set of 2896 compounds (Additional file 5), an undisclosed subset of the test set whose values for all endpoints were known, was used to assess the submitted predictions and rate the corresponding models. All submitted models were rated and ranked by the NICEATM panel on the basis of several performance statistics.
EPA acute oral toxicity ranking TS—statistics, confusion matrix and performance measures
EPA acute oral toxicity ranking ES—statistics, confusion matrix and performance measures
We have proposed a new methodology for predictive QSAR modelling based on local group selection during model development and runtime execution. The main feature of aiQSAR is the compound-specific building of predictive models: for each target compound, only a local group of TS compounds that are structurally similar to the target is considered. We quantified this similarity through the ADM, which is based on Tanimoto distances between molecular fingerprints. Three application examples were discussed: BCF for regression, the Ames test for binary classification and the EPA acute oral toxicity ranking for multi-class classification. In all three, the ADM correlated positively with the performance metrics.
In general, we believe that this QSAR approach, based on local group selection and modelling automation, can raise the overall quality of in silico predictions. This could be especially important for endpoints with under-represented classes, where localization balances the data naturally, without the resampling required by explicit balancing procedures, and might thereby improve coverage of the chemical space for QSAR predictions. We plan to fully develop this methodology into a stand-alone software tool that can easily be run on any dataset, without prior knowledge of the intricate details of QSAR modelling. This should help open the field to a wider range of users.
KV, MG and EB developed the methodology. KV did the tests. KV and MG prepared the manuscript. All authors read and approved the final manuscript.
The authors would like to thank Kunal Roy, Ana Caballero, Alessandra Roncaglioni, Cosimo Toma and Alberto Manganaro for many helpful discussions.
The authors declare that they have no competing interests.
Availability of data and materials
All data generated or analyzed during this study are included as the additional information to the article.
This study was financially supported by the European Union, through Marie Skłodowska-Curie Action ‘in3’: MSCA-ITN-2016, Grant No. 721975.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
- 1. Gini G (2016) QSAR methods. In: Benfenati E (ed) In silico methods for predicting drug toxicity. Springer Science, New York, pp 1–20
- 19. Ruili H, Menghang X (2017) Editorial: Tox21 challenge to build predictive models of nuclear receptor and stress response pathways as mediated by exposure to environmental toxicants and drugs. Front Environ Sci 5:3
- 21. Gunning D (2016) Explainable artificial intelligence (XAI). In: Program information. U.S. Defense Advanced Research Projects Agency. https://www.darpa.mil/program/explainable-artificial-intelligence. Accessed 02 Jan 2019
- 23. Martin T (2016) User’s guide for T.E.S.T. (version 4.2) (Toxicity Estimation Software Tool). U.S. Environmental Protection Agency
- 26. Kode (2017) DRAGON 7.0.8
- 28. https://cran.r-project.org/web/packages/rcdk/rcdk.pdf. Accessed 02 Jan 2019
- 29. ftp.ncbi.nlm.nih.gov/pubchem/specifications/pubchem_fingerprints.txt. Accessed 02 Jan 2019
- 32. https://cran.rstudio.com/web/packages/fingerprint/fingerprint.pdf. Accessed 02 Jan 2019
- 34. http://topepo.github.io/caret/available-models.html. Accessed 02 Jan 2019
- 38. Liaw A, Wiener M (2002) Classification and regression by randomForest. R News 2(3):18–22
- 39. Simm J, de Abril I, Sugiyama M (2014) Tree-based ensemble multi-task learning method for classification and regression. IEICE Trans Inf Syst 97:6
- 40. https://cran.r-project.org/web/packages/caTools/caTools.pdf. Accessed 02 Jan 2019
- 41. https://cran.r-project.org/web/packages/e1071/e1071.pdf. Accessed 02 Jan 2019
- 43. https://cran.r-project.org/web/packages/sdwd/sdwd.pdf. Accessed 02 Jan 2019
- 44. Benfenati E, Manganaro A, Gini G (2013) VEGA-QSAR: AI inside a platform for predictive toxicology. CEUR Workshop Proc 1107:21–28
- 49. U.S. National Archives and Records Administration (2005) Toxicity category. In: Code of Federal Regulations. Office of the Federal Register. www.govinfo.gov/content/pkg/CFR-2005-title40-vol23/pdf/CFR-2005-title40-vol23-sec156-64.pdf. Accessed 02 Jan 2019
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.