J Chem Inf Model. 2017 Aug 28;57(8):1859-1867. doi: 10.1021/acs.jcim.6b00694. Epub 2017 Aug 7.

Shallow Representation Learning via Kernel PCA Improves QSAR Modelability.

Author information

1. Department of Bioengineering, Stanford University, Shriram Center, Room 213, 443 Via Ortega MC 4245, Stanford, California 94305, United States.

Abstract

Linear models offer a robust, flexible, and computationally efficient set of tools for modeling quantitative structure-activity relationships (QSARs) but have been eclipsed in performance by nonlinear methods. Support vector machines (SVMs) and neural networks are currently among the most popular and accurate QSAR methods because they learn new representations of the data that greatly improve modelability. In this work, we use shallow representation learning to improve the accuracy of L1 regularized logistic regression (LASSO) and meet the performance of Tanimoto SVM. We embedded chemical fingerprints in Euclidean space using Tanimoto (a.k.a. Jaccard) similarity kernel principal component analysis (KPCA) and compared the effects on LASSO and SVM model performance for predicting the binding activities of chemical compounds against 102 virtual screening targets. We observed similar performance and patterns of improvement for LASSO and SVM. We also empirically measured model training and cross-validation times to show that KPCA used in concert with LASSO classification is significantly faster than linear SVM over a wide range of training set sizes. Our work shows that powerful linear QSAR methods can match nonlinear methods and demonstrates a modular approach to nonlinear classification that greatly enhances QSAR model prototyping facility, flexibility, and transferability.
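The following is a minimal sketch (not the authors' code) of the pipeline the abstract describes: embed binary chemical fingerprints with Tanimoto (Jaccard) kernel PCA, then classify with L1-regularized logistic regression. It assumes scikit-learn; the toy fingerprints and hyperparameters (n_components, C) are placeholders, not values from the paper.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression

def tanimoto_kernel(X, Y):
    """Tanimoto (Jaccard) similarity between rows of binary matrices X and Y."""
    inner = X @ Y.T                        # |x AND y| for binary fingerprints
    norm_x = X.sum(axis=1).reshape(-1, 1)  # |x|
    norm_y = Y.sum(axis=1).reshape(1, -1)  # |y|
    return inner / (norm_x + norm_y - inner)

# Toy binary fingerprints (rows = compounds) and activity labels.
rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 1024))
y_train = rng.integers(0, 2, size=200)
X_test = rng.integers(0, 2, size=(50, 1024))

# Shallow representation learning: KPCA on the precomputed Tanimoto kernel
# embeds the fingerprints in Euclidean space.
kpca = KernelPCA(n_components=100, kernel="precomputed")
Z_train = kpca.fit_transform(tanimoto_kernel(X_train, X_train))
Z_test = kpca.transform(tanimoto_kernel(X_test, X_train))

# L1-regularized ("LASSO") logistic regression on the embedded features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(Z_train, y_train)
scores = clf.predict_proba(Z_test)[:, 1]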

PMID: 28727421
PMCID: PMC5942586
DOI: 10.1021/acs.jcim.6b00694
[Indexed for MEDLINE]