
Interpretation of Structure–Activity Relationships in Real-World Drug Design Data Sets Using Explainable Artificial Intelligence


In silico models based on deep neural networks (DNNs) are promising for predicting activities and properties of new molecules. Unfortunately, their inherent black-box character hinders understanding of which structural features are important for activity. This information, however, is crucial for capturing the underlying structure–activity relationships (SARs) to guide further optimization. To address this interpretation gap, “Explainable Artificial Intelligence” (XAI) methods have recently become popular. Herein, the authors apply and compare multiple XAI methods to lead optimization data sets from real-world projects with well-established SARs and available X-ray crystal structures. As they show, easily understandable and comprehensive interpretations are obtained by combining DNN models with powerful interpretation methods. In particular, SHAP-based methods are promising for this task. A novel visualization scheme using atom-based heatmaps provides useful insights into the underlying SAR. It is important to note that all interpretations are only meaningful in the context of the underlying models and associated data.



The search for novel, potent, and well-tolerated molecules that are active against a particular target protein should ultimately result in new candidates for clinical development. Many different aspects must be considered in a multiparametric fashion when designing new molecules matching a particular target profile. Here, computer-assisted drug design (CADD) with modern in silico methods can significantly guide project teams in accomplishing this task.


Example of the heatmap visualization for an fXa inhibitor. A heatmap with a cutoff is shown with the corresponding color bar. Green values represent structural features that are favorable for activity, and red values represent detrimental features.


Advanced machine learning techniques are nowadays routinely applied to develop statistical models from experimental data in order to guide further compound design. Application areas of these models include de novo design by generative artificial intelligence (AI) methods, synthesis prediction and retrosynthetic analysis, as well as prediction of molecular properties. These property predictions include ADMET (absorption, distribution, metabolism, excretion, toxicity) and physicochemical properties (such as logP or solubility) as well as biological activity against the target protein (enzymatic and cellular activity) or antitargets.


X-ray crystal structure of a representative indole-3-carboxamide (orange carbons, 2.5 Å resolution) in complex with human renin, resulting from structure-based optimization. The protein binding site is displayed with gray carbon atoms. Key protein–ligand interactions are indicated as follows: hydrogen bonds in yellow, π–π interactions in cyan, and salt bridges in purple. All 3D figures were generated using the program Maestro.


X-ray crystal structure of a representative indole-2-carboxamide (green carbons, 2.2 Å resolution) in complex with human factor Xa. The protein binding site is displayed with gray carbon atoms. Water molecules are shown with red oxygen spheres.


With the steadily increasing popularity of AI and deep learning methods, deep neural networks (DNNs) are now routinely employed as advanced statistical engines for property predictions. The term DNN refers to complex nonlinear statistical models consisting of neural networks with more than one hidden layer. Faster computer hardware, improved training algorithms that avoid overfitting, and the availability of software solutions on many computer platforms have fueled successful applications of DNNs in AI areas such as computer vision and language processing, but also in drug design, as demonstrated by numerous successful examples.
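To make this concrete, the following is a minimal sketch of such a network in PyTorch: a feed-forward model with two hidden layers that predicts an activity value from a binary molecular fingerprint. The layer sizes, dropout rate, and fingerprint length are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class ActivityDNN(nn.Module):
    """Feed-forward DNN mapping a binary fingerprint to a predicted activity."""
    def __init__(self, n_bits: int = 2048, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bits, hidden),       # hidden layer 1
            nn.ReLU(),
            nn.Dropout(0.2),                 # regularization against overfitting
            nn.Linear(hidden, hidden // 2),  # hidden layer 2 (makes the net "deep")
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden // 2, 1),       # single regression output, e.g. pKi
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = ActivityDNN()
example_fp = torch.randint(0, 2, (1, 2048), dtype=torch.float32)  # dummy fingerprint
print(model(example_fp))  # one predicted activity value
```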


While several studies explore the potential advantage of DNN methods over classical machine learning (ML) approaches, a missing feature of DNNs is the interpretation or “explainability” of the resulting models. Most AI-based machine learning methods are inherently built as black boxes, and the design of the algorithm typically does not allow one to understand why a given model produces a certain prediction. Therefore, one cannot explore why good or erroneous predictions are obtained, and a method cannot be improved in a focused manner. User acceptance (e.g., among medicinal chemists) is often lower if no explanation in structural terms can be provided, and interpretable predictions might provide better guidance for prospective design by inspiring novel chemical transformations of lead structures.


Current solutions to this problem employ model-independent approaches to detect important structural features, such as “response maps” for detecting and visualizing local property gradients based on structure fragmentation and derivatization. This method incrementally modifies structures by systematically removing or adding small substituents. The resulting predictions for a molecule and its virtual matched pairs serve to estimate the impact of a small substitution on the predicted property, following the work of Polishchuk et al., where a review of classical methods for model explanation is also available. Several methods have since been introduced to obtain model explanations from deep neural networks. These include frameworks like LIME or SHAP, which can provide explanations for neural networks with fixed-size inputs, as well as more general methods based, for example, on gradients or backpropagation. Gradient-based methods provide explanations by calculating the influence of a change in one feature on the final prediction. In contrast, backpropagation-based approaches propagate the output back to the input values to estimate the influence of these features. Recently, the integrated gradients explainable artificial intelligence (XAI) approach for interpreting graph-based neural networks was described and applied to four ADMET-related data sets. Applications in drug design have so far mainly focused on extracting single features of a molecule that contribute to a property and, more recently, on marking parts of the molecule as positive or negative, without information on the relative intensity of a contribution. Relative intensities and contributions are, however, crucial for carefully tuning the activity of molecules or analyzing the effects of modifications. The authors fill this gap by proposing a novel visualization of the results using atom-based heatmaps, which allows the obtained explanation to be mapped quantitatively onto a molecule.
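As an illustration of how per-feature attributions can be obtained in practice, the sketch below applies the shap package's DeepExplainer (the DeepSHAP variant referenced in the figure captions below) to a fingerprint-based PyTorch model such as the one sketched above. The background set and query fingerprint are random placeholders; in a real setting they would be training-set and query-compound fingerprints.

```python
import shap
import torch

# Background sample used by DeepSHAP to estimate the expected model output;
# in practice this would be a representative subset of the training fingerprints.
background = torch.randint(0, 2, (100, 2048), dtype=torch.float32)

model.eval()  # 'model' as sketched above; disable dropout for deterministic attributions
explainer = shap.DeepExplainer(model, background)

# Signed per-bit contributions for one query molecule: positive values push
# the prediction toward higher activity, negative values toward lower activity.
query = torch.randint(0, 2, (1, 2048), dtype=torch.float32)
shap_values = explainer.shap_values(query)
print(shap_values)  # one signed contribution per fingerprint bit
```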


Similarity plot of the whole fXa data set. The subset of the indole series is highlighted with orange stars. The similarity map was calculated with the UNITY fingerprint as implemented in D360, with a similarity threshold of 70%.


None of these approaches for interpreting DNN predictions had yet been applied in a prospective drug design project to rationalize a structure–activity relationship (SAR). Any useful interpretation should first correctly predict activity trends for two related compounds (e.g., a matched molecular pair, i.e., the exchange of one small substituent for another) and then clearly recognize this differing feature as important, based on the change in observed activity. In addition, the information regarding important structural features should be easily accessible and understandable in order to draw conclusions for novel chemical ideas to improve biological activity or other properties. While experienced medicinal and computational chemists often perform these analyses manually for small data sets or matched molecular pairs to derive a rationale for the next optimization cycle, the sheer amount of biological and structural data often precludes this analysis for larger and structurally more diverse data sets.


Left: X-ray crystal structure of the indole-2-carboxamide 21 (green carbons, 3.0 Å resolution) in complex with human factor Xa. The protein binding site is displayed with gray carbon atoms and, in addition, as a solvent-accessible surface with the electrostatic potential mapped onto it (blue = positive, red = negative surface regions). Right: DeepSHAP fingerprint heatmap for compound 3 and color bar for the DNN models derived for the small fXa data set.


Therefore, the goal of their study is to explore different methods for explaining the predictions of deep neural networks trained on the biological activity (protein–ligand binding affinity, Ki and IC50) of two pharmacologically relevant target proteins, namely the serine protease factor Xa (fXa) and the aspartic protease renin. As the SAR for both targets is well documented and several X-ray structures of protein–ligand complexes have been solved, they provide a good test case for exploring whether DNN-based statistical model interpretations are able to capture SAR information.


Gradient heatmaps generated from a GCNN model. From left to right: positive sum of contributions; negative sum of contributions; positive and negative sums mapped together; total sum of absolute values. Essentially no information can be extracted, since all atoms have both negative and positive features, and the variance does not correlate with structurally important regions of the molecule.


To apply these methods, they first trained predictive DNN models on medium-sized SAR data sets for factor Xa and renin and then implemented different methods for explaining the neural network predictions. Finally, they present a heatmap-based method that quantitatively maps a model explanation onto a 2D depiction of the molecular structure, supporting the design of new analogues for the next round of synthesis based on the analysis of all experimental data. In contrast to earlier work, the heatmaps provide not only information about single, abstract features but a single explanation combining the extracted information on all features, which significantly facilitates the application of the methods.
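A possible implementation of such an atom-based heatmap with RDKit is sketched below: per-bit attributions (e.g., the DeepSHAP values from above) are distributed over the atoms of each Morgan fingerprint bit's environment and rendered as a similarity map. The example structure, random attribution values, and equal-weight distribution scheme are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem.Draw import SimilarityMaps, rdMolDraw2D

# Toy indole-2-carboxamide (illustrative structure, not a compound from the paper)
mol = Chem.MolFromSmiles("Cc1ccc(NC(=O)c2cc3ccccc3[nH]2)cc1")

bit_info = {}  # filled with {bit: ((center_atom, radius), ...)} for set bits
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048, bitInfo=bit_info)

# Placeholder per-bit attributions; in practice these come from DeepSHAP
shap_values = np.random.randn(2048) * 0.1

# Distribute each set bit's contribution evenly over the atoms of its environment
atom_weights = np.zeros(mol.GetNumAtoms())
for bit, environments in bit_info.items():
    for center_atom, radius in environments:
        atoms = {center_atom}
        for bond_idx in Chem.FindAtomEnvironmentOfRadiusN(mol, radius, center_atom):
            bond = mol.GetBondWithIdx(bond_idx)
            atoms.update((bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()))
        for idx in atoms:
            atom_weights[idx] += shap_values[bit] / len(atoms)

# Render the heatmap; with the default colormap, positive (favorable) weights
# appear greenish and negative (detrimental) ones reddish. Requires a recent
# RDKit build with Cairo support.
drawer = rdMolDraw2D.MolDraw2DCairo(400, 400)
SimilarityMaps.GetSimilarityMapFromWeights(mol, list(atom_weights), draw2d=drawer)
drawer.FinishDrawing()
with open("heatmap.png", "wb") as out:
    out.write(drawer.GetDrawingText())
```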


Tobias Harren, Hans Matter, Gerhard Hessler, Matthias Rarey, and Christoph Grebner. Interpretation of Structure–Activity Relationships in Real-World Drug Design Data Sets Using Explainable Artificial Intelligence. Journal of Chemical Information and Modeling, Article ASAP. DOI: 10.1021/acs.jcim.1c01263
