
A Coarse-Grained Elastic Network Atom Contact Model and Its Use in the Simulation of Protein Dynamics and the Prediction of the Effect of Mutations

Abstract

Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme one finds accurate but slow molecular-dynamics based methods with all-atom representations and detailed atom potentials. At the other extreme, one finds fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, making it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, through vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased in the prediction of stabilizing mutations.

Author Summary

Normal mode analysis (NMA) methods can be used to explore potential movements around an equilibrium conformation by means of calculating the eigenvectors and eigenvalues associated with different normal modes. Each normal mode represents a global, collective, correlated and complex form of motion of the entire protein. Any conformation around equilibrium can be represented as a weighted combination of normal modes. Differences in the magnitudes of the set of eigenvalues between two structures can be used to calculate differences in entropy. We introduce ENCoM, the first coarse-grained NMA method to consider atom-specific side-chain interactions and thus account for the effect of mutations on eigenvectors and eigenvalues. ENCoM performs better than existing NMA methods in their traditional applications and is the first to predict the effect of mutations on protein stability and function. Comparing ENCoM to a large set of dedicated methods for the prediction of the effect of mutations on protein stability shows that ENCoM performs better than existing methods, particularly on stabilizing mutations. ENCoM is the first entropy-based method developed to predict the effect of mutations on protein stability.

Introduction

Biological macromolecules are dynamic objects. In the case of proteins, such movements form a continuum ranging from bond and angle vibrations, sub-rotameric and rotameric side-chain rearrangements [1], loop or domain movements through to folding. Such movements are closely related to function and play important roles in most processes, such as enzyme catalysis [2], signal transduction [3] and molecular recognition [4] among others. While the number of proteins with known structure is vast, with around 85K structures for over 35K protein chains (at 90% sequence identity) in the PDB database [5], our view of protein structure tends to be somewhat biased, even if unconsciously, towards considering such macromolecules as rigid objects. This is due in part to the static nature of images used in publications to guide our interpretations of how structural details influence protein function. However, the main reason is that most known structures were solved using X-ray crystallography [6], where dynamic properties are limited to b-factors and the observation of alternative locations. Despite this, it is common to analyze larger conformational changes using X-ray structures through the comparison of different crystal structures of the same protein obtained in different conditions or bound to different partners (protein, ligand, nucleic acid). It is particularly necessary to consider the potential effect of crystal packing [7], [8] when studying dynamic properties using X-ray structures.

Nuclear magnetic resonance (NMR) is a powerful technique that gives more direct information regarding protein dynamics [9], [10]. Different NMR methodologies probe distinct timescales covering 15 orders of magnitude, from 10⁻¹² s side-chain rotations via nuclear spin relaxation to 10³ s using real-time NMR [10]. In practice, there is a limitation on the size of proteins that can be studied (between 50 and 100 kDa), although this boundary is being continuously pushed [11], providing at least partial dynamic information on extremely large systems [12]. However, only a small portion (around 10%) of the available protein structures in the Protein Data Bank (PDB) are the result of NMR experiments [5].

Molecular dynamics (MD) simulations numerically solve the classical equations of motion for an ensemble of atoms whose interactions are modeled using empirical potential energy functions [13]–[15]. At each time step the positions and velocities of each atom are updated based on their current position and velocity and on the forces exerted by the rest of the system. The first MD simulation of a protein (Bovine Pancreatic Trypsin Inhibitor, BPTI) ran for a total of 8.8 ps [16], followed by slightly longer simulations (up to 56 ps) [17]. A film of the latter can be seen online (http://youtu.be/_hMa6G0ZoPQ). Despite using simplified potentials and structure representations (implicit hydrogen atoms) as well as ignoring the solvent, these first simulations showed large oscillations around the equilibrium structure, concerted loop motions and hydrogen bond fluctuations that correlate with experimental observations. Nowadays, the latest breakthroughs in molecular dynamics simulations deal with biological processes that take place over longer timescales, for example protein folding [18], transmembrane receptor activation [19], [20] and ligand binding [21]. These simulations require substantial computer power or purpose-built hardware such as Anton, which pushes the current limit of MD simulations to the millisecond range [22]. Despite powerful freely available programs like NAMD [23] and GROMACS [24] and the rise in computational power over the last decade, longer simulations reaching the timescales where most biological processes take place remain state-of-the-art.
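The per-step update described above can be illustrated with a minimal sketch. The velocity-Verlet scheme below is a generic example of such an integration step and is not meant to represent the specific integrators or potentials used in the studies cited; all names are illustrative.

```python
import numpy as np

def velocity_verlet_step(x, v, forces, masses, dt):
    """One generic velocity-Verlet integration step.

    x, v   : (N, 3) arrays of positions and velocities
    forces : callable returning the (N, 3) force array for given positions
    masses : (N,) array of atomic masses
    dt     : time step
    """
    m = masses[:, None]
    f_old = forces(x)
    x_new = x + v * dt + 0.5 * (f_old / m) * dt ** 2   # advance positions
    f_new = forces(x_new)
    v_new = v + 0.5 * (f_old + f_new) / m * dt          # advance velocities
    return x_new, v_new
```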

Normal modes are long established in the analysis of the vibrational properties of molecules in physical chemistry [25]. Their application to the study of proteins dates back just over 30 years [26]–[30]. These early Normal Mode Analysis (NMA) methods utilized either internal or Cartesian coordinates and complex potentials (at times the same ones used in MD). As with early MD methods, their application was restricted to relatively small proteins. Size limitations notwithstanding, these early studies were sufficient to demonstrate the existence of modes representing concerted delocalized motions, showing a facet of protein dynamics that is difficult to access with MD methods. Some simplifications were later introduced and shown to have little effect on the slowest vibrational modes and their utility to predict certain molecular properties such as crystallographic b-factors. These simplifications included the use of a single-parameter potential [31], blocks of consecutive amino acids considered as units (nodes) [32] and the assumption of isotropic [33] fluctuations in the Gaussian Network Model (GNM) or anisotropic fluctuations [34]. These approximations have drastically reduced the computational time required, thus permitting a much broader exploration of conformational space using conventional desktop computers in a matter of minutes. Of these, the most amply used method is the Anisotropic Network Model (ANM) [35], [36]. ANM is often referred to simply as an elastic network model; one should however bear in mind that all normal mode analysis methods are examples of elastic network models. ANM uses a simple Hooke potential connecting every pair of nodes (point masses defined at the positions of alpha carbons) within a predetermined cut-off distance (usually 18 Å). More recently, a simplified model, called Spring generalized Tensor Model (STeM), that uses a potential function with four terms (covalent bond stretching, angle bending, dihedral angle torsion and non-bonded interaction) has been proposed [37]. The normal mode analysis of a macromolecule produces a set of modes (eigenvectors and their respective eigenvalues) that represent possible movements. Any conformation of the macromolecule can in principle be reached from any other using a linear combination of amplitudes associated with eigenvectors.
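As an illustration of how simple such models are, the sketch below builds the standard ANM Hessian from Cα coordinates using uniform Hookean springs within a cut-off. It is a minimal example of the general construction, not the ENCoM implementation; the `cutoff` and `gamma` values are illustrative defaults.

```python
import numpy as np

def anm_hessian(ca_coords, cutoff=18.0, gamma=1.0):
    """Build the 3N x 3N ANM Hessian from C-alpha coordinates.

    Every pair of nodes closer than `cutoff` is connected by a Hookean
    spring of uniform force constant `gamma`.
    """
    n = len(ca_coords)
    hessian = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = ca_coords[j] - ca_coords[i]
            r2 = np.dot(d, d)
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2   # 3x3 off-diagonal super-element
            hessian[3*i:3*i+3, 3*j:3*j+3] = block
            hessian[3*j:3*j+3, 3*i:3*i+3] = block
            hessian[3*i:3*i+3, 3*i:3*i+3] -= block  # diagonal blocks balance the row
            hessian[3*j:3*j+3, 3*j:3*j+3] -= block
    return hessian
```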

It is essential, however, not to lose sight of the limitations of normal mode analysis methods. Namely, normal modes tell us absolutely nothing about the actual dynamics of a protein in the sense of the evolution in time of atomic coordinates. Plainly speaking, normal mode analysis is informative about possible movements, not actual movements, and only about possible movements around equilibrium. These two caveats clearly place normal mode analysis and molecular dynamics apart. First, molecular dynamics gives an actual dynamics (insofar as the potential is realistic and quantum effects can be ignored). Second, while the equilibrium state (or the starting conformation) affects the dynamics, one can explore biologically relevant timescales given sufficient computational resources to perform long simulations.

The vast majority of coarse-grained NMA models use only the geometry of the protein backbone (via Cα Cartesian positions) and disregard the nature of the corresponding amino acids; in doing so, a great deal of information is lost. To our knowledge, there have been three independent attempts at expanding coarse-grained NMA models over the years to include extra information based on backbone and side-chain atoms. Micheletti et al. [38] developed the βGM model, in which the protein is represented by Cβ atoms for all residues except glycine in addition to the Cα atoms. The Hamiltonian is a function exclusively of Cα and Cβ distances. As a Gaussian model, βGM does not give information about the directions of movement but only their magnitudes and can be used solely to predict b-factors. As Cβ atoms do not change position, this model cannot by definition be used to predict the effect of mutations, either on dynamics or on stability. The authors report results on b-factor prediction comparable to GNM. López-Blanco et al. [39] developed an NMA model in internal coordinates with three different levels of representation: 1. heavy atoms, 2. five pseudo-atoms (backbone: NH, Cα, CO and side-chain: Cβ and one at the center of mass of the remaining side-chain atoms) and 3. a Cα representation. While the potential is customizable, the default potential uses a force constant that is distance dependent but atom-type independent. The method is validated through overlap analysis on a dataset of 23 cases. The authors report no significant differences in overlap for the different representations. Lastly, Kurkcuoglu et al. [40] developed a method that mixes different levels of coarse-graining. The authors test the method on a single protein, triosephosphate isomerase, with a higher-resolution atomic representation for a loop, and achieve better overlap values than with the Cα-only representation. All the methods above, while adding more detail to the representation, utilize force constants that are not atom-type dependent. Therefore, while less coarse-grained, all the methods above are still atom-type and amino-acid-type agnostic.

By definition, irrespective of the level of coarse-graining, such models cannot account for the effect of mutations on protein dynamics or stability. It has been shown that different amino acids interact differently and that single mutations can have a large impact on protein function and stability [41]–[43]. Mutations of non-catalytic residues that participate in concerted (correlated) movements have been shown to disrupt protein function in NMR relaxation experiments [44]–[46]. Several cases have been documented of mutations that do not affect the global fold of the protein, but affect protein dynamics and disrupt enzyme function [47].

To overcome this limitation of coarse-grained NMA methods while maintaining the advantages of simplified elastic network models, we developed a new mixed coarse-grained NMA model called the Elastic Network Contact Model (ENCoM). ENCoM employs a potential function based on the four-body potential of STeM with an additional term that takes into consideration the nature and orientation of side chains. Side-chain atomic contacts are used to modulate the long-range interaction term with a factor based on the surface area in contact [48] and the type of each atom in contact. Additionally, we introduce a non-specific version of ENCoM (ENCoMns) in which all interactions between atom types are the same. ENCoM and ENCoMns were validated through comparison to ANM, GNM and STeM with respect to the prediction of crystallographic b-factors and conformational changes, two properties conventionally used to test ENM methods. Moreover, we test the ability of ENCoM and ENCoMns to predict the effect of mutations on protein stability and compare them to a large number of existing methods specifically designed for the prediction of the effect of single point mutations on protein stability. Finally, we use ENCoM to predict the effect of mutations on protein function in the absence of any effects on protein stability.

Results

Correlation between experimental and predicted crystallographic b-factors

We utilized a dataset of 113 non-redundant high-resolution crystal structures [49] to predict b-factors using the calculated ENCoM eigenvectors and eigenvalues as described previously [35] (Equation 4). We compared the b-factors predicted using ENCoM, ENCoMns, ANM, STeM and GNM to the experimental Cα b-factors for the above dataset (Supplementary Table S1). For each protein we calculated the Pearson correlation between experimental and predicted values. The results in Figure 1 represent the bootstrapping average of 10000 iterations. We observe that, while comparable, ENCoM (median = 0.54) and ENCoMns (median = 0.56) have lower median values than STeM (median = 0.60) and GNM (median = 0.59) but similar to or higher than ANM (median = 0.54). It should be noted that it is possible to find specific parameter sets that maximize b-factor correlations beyond the values obtained with STeM and GNM (see Methods). However, we observe a trade-off between the prediction of b-factors on one side and overlap and the effect of mutations on the other (see Methods). Ultimately we opted for a parameter set that maximizes overlap and the prediction of mutations with complete disregard for b-factor predictions. Nonetheless, as shown below, even the lower correlations obtained with ENCoM are sufficiently high to detect functionally relevant local variations in b-factors as a result of mutations. As GNM does not provide information on the direction of movements or the effect of mutations, it is not considered further in the present study.
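For readers unfamiliar with the procedure, the following minimal sketch computes per-residue b-factor predictions from the modes and correlates them with experimental values. It assumes Equation 4 takes the usual form in which the mean-square fluctuation of node i is proportional to the sum over internal modes of |v_k,i|²/λ_k; the constant of proportionality cancels in the Pearson correlation.

```python
import numpy as np
from scipy.stats import pearsonr

def predicted_bfactors(eigvals, eigvecs, n_skip=6):
    """Mean-square fluctuation per residue from normal modes.

    eigvals : (3N,) eigenvalues sorted in increasing order
    eigvecs : (3N, 3N) matrix with eigenvectors in columns
    The first `n_skip` modes (rigid-body motions) are ignored; the predicted
    b-factor of node i is proportional to sum_k |v_k,i|^2 / lambda_k.
    """
    n = eigvecs.shape[0] // 3
    b = np.zeros(n)
    for k in range(n_skip, eigvecs.shape[1]):
        mode = eigvecs[:, k].reshape(n, 3)
        b += np.sum(mode ** 2, axis=1) / eigvals[k]
    return b

# Correlation with experimental C-alpha b-factors (illustrative usage):
# r, _ = pearsonr(predicted_bfactors(eigvals, eigvecs), experimental_bfactors)
```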

Figure 1. Correlation between predicted and experimental b-factors for different ENM models.

Box plots represent the average correlations from a non-redundant dataset containing 113 proteins, generated from 10000 resampling bootstrapping iterations. ANM and ENCoM have comparable correlations, which are lower than those of GNM and STeM.

https://doi.org/10.1371/journal.pcbi.1003569.g001

Exploration of conformational space

By definition, any conformation of a protein can be described as a linear combination of amplitudes associated with the eigenvectors representing normal modes. It should be stressed that such conformations are only as precise as the choice of structure representation used and correct within the quadratic approximation of the potential around equilibrium. Those limitations notwithstanding, one application of NMA is to explore the conformational space of macromolecules using such linear combinations of amplitudes. Pairs of distinct protein conformations, often obtained by X-ray crystallography, are used to assess the extent to which the eigenvectors calculated from a starting conformation can generate movements leading to conformational changes in the direction of a target conformation. Rather than an optimization to determine the amplitudes for a linear combination of eigenvectors, this is often simplified to the analysis of the overlap (Equation 5), i.e., the determination of the single largest contribution from a single eigenvector towards the target conformation. In a sense the overlap represents a lower bound on the ability to predict conformational changes without requiring the use of an optimization process.
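A minimal sketch of the overlap calculation, assuming Equation 5 is the usual normalized dot product between an eigenvector and the displacement vector connecting the two (superposed) conformations:

```python
import numpy as np

def best_overlap(eigvecs, delta_r, modes=range(6, 16)):
    """Best overlap over a set of modes.

    eigvecs : (3N, 3N) eigenvector matrix (columns are modes)
    delta_r : (3N,) displacement vector from the starting to the target
              conformation (structures superposed beforehand)
    modes   : indices of the modes considered, here the 10 slowest
              internal modes after the 6 rigid-body motions
    """
    delta_r = delta_r / np.linalg.norm(delta_r)
    overlaps = [abs(np.dot(eigvecs[:, k], delta_r)) / np.linalg.norm(eigvecs[:, k])
                for k in modes]
    return max(overlaps)
```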

The analysis of overlap for ANM, STeM, ENCoM and ENCoMns was performed using the Protein Structural Change Database (PSCDB) [50], which contains 839 pairs of protein structures undergoing conformational change upon ligand binding. The authors classify those changes into seven types: coupled domain motions (59 entries), independent domain motions (70 entries), coupled local motions (125 entries), independent local motions (135 entries), burying ligand motions (104 entries), no significant motion (311 entries) and other types of motion (35 entries). Independent movements are movements that do not affect the binding pocket, while coupled movements are necessary to accommodate ligands in the pose found in the bound (holo) form. Burying movements are associated with a significant change in the solvent-accessible surface of the ligand, but with small structural changes (backbone RMSD variation lower than 1 Å). Although the types of movements are differentiated based on the ligands, the ligands themselves were not used as part of the normal mode analysis. Since the side-chain movements associated with burying movements cannot be predicted with coarse-grained NMA methods, we restrict the analysis to domain and loop movements [51], as these involve backbone movements amenable to analysis using coarse-grained NMA methods. For practical purposes, in order to simplify the calculations in this large-scale analysis, NMR structures were not considered. It is worth stressing, however, that none of the NMA methods presented here has any restriction with respect to the structure determination method, and all can also be used with modeled structures. A total of 736 conformational changes, half representing apo to holo changes and the other half holo to apo (368 entries from the PSCDB in total), are used in this study (Supplementary Table S2).

Overlap calculations were performed from the unbound (apo) form to the bound (holo) form and from the bound form to the unbound form. Bootstrapped results based on the best overlap found among the 10 slowest modes [52], [53] for the two types of conformational changes, domain and loop movements, are shown in Figures 2 and 3 respectively. In each case a set of box plots represents the performance of the four methods being compared, namely STeM, ANM, ENCoM and ENCoMns. The left-most set of box plots represents the average over all data while subsequent sets represent distinct subsets of the dataset as labeled. The first observation (comparing Figures 2 and 3) is that all tested NMA models show higher average overlaps for domain movements (Figure 2) than loop movements (Figure 3). This confirms earlier observations that NMA methods capture essential cooperative global (delocalized) movements associated with domain movements [51]. Loop movements, on the other hand, are likely to come about from a more finely tuned combination of normal mode amplitudes than what can be adequately described with a single eigenvector as measured by the overlap.

Figure 2. Prediction of domain motions.

The best overlap found within the 10 slowest internal motion modes for different NMA models on domain movements. Legend: All (all cases, N = 248), I (Independent movements, N = 130), C (Coupled movements, N = 117), A (apo form, N = 124) and H (holo form, N = 124). AI (apo form independent, N = 130), HI (holo independent, N = 130), AC (apo coupled, N = 116) and HC (holo coupled, N = 116). Box plots generated from 10000 resampling bootstrapping iterations. ENCoM/ENCoMns outperform ANM and STeM on all types of motion.

https://doi.org/10.1371/journal.pcbi.1003569.g002

Figure 3. Prediction of loop conformational change.

The best overlap found within the 10 slowest internal motion modes for different NMA models on loop movements. Acronyms are the same as in Figure 2. Number of cases: All (488), I (252), C (236), A (244) and H (244). AI (126), HI (126), AC (118) and HC (118). Box plots generated from 10000 resampling bootstrapping iterations. ENCoM/ENCoMns outperform ANM and STeM on all types of motion. The prediction of loop motions is much harder, and here the difference between ENCoM and ENCoMns is more pronounced.

https://doi.org/10.1371/journal.pcbi.1003569.g003

The second observation is that STeM performs quite poorly compared to the other methods irrespective of the type of movement (domain or loop). This is somewhat surprising when one compares it with ENCoM or ENCoMns, considering how similar the potentials are. This suggests that the modulation of interactions by the surface area in contact of the corresponding side-chains (the β_ij terms in Equation 1), as well as the specific parameters used, are crucial.

Focusing for a moment on domain movements (Figure 2), ENCoM/ENCoMns outperform all other methods both in general and for every sub-category of motion therein. Independent movements show lower overlaps than coupled ones; a fraction of the independent movements may not be biologically relevant due to crystal packing. Interestingly, while there are no differences between the overlap for independent movements starting from the apo or holo forms, this is not the case for coupled movements. In this case (right-most two sets in Figure 2), it is easier to use the apo (unbound) form to predict the holo (bound) form, suggesting that the lower packing in the apo form (as these are frequently more open) generates eigenvectors that favor a more comprehensive exploration of conformational space.

Lastly, with respect to loop movements (Figure 3), while it is more difficult to obtain good overlaps irrespective of the method or type of structure used, overall ENCoM/ENCoMns again outperform ANM. Some of the same patterns observed for domain movements are repeated here, for example the higher overlap for coupled apo versus holo movements.

We observe that ENCoMns consistently performs almost as well as ENCoM irrespective of the type of motion considered (all sets in Figures 2 and 3). As side-chain conformations in crystal structures tend to minimize unfavourable interactions, the modulation of interactions by atom types that differentiates ENCoM from ENCoMns plays a minor but still positive role.

Prediction of the effect of mutations

Normal mode resonance frequencies (eigenvalues) are related to vibrational entropy [54], [55] (see Methods). Therefore, it is reasonable to assume that the information contained in the normal modes can be used to infer differences in protein stability between two structures differing by a mutation, under the assumption that the mutation does not drastically affect the equilibrium structure. A mutation may affect stability through an increase in the entropy of the folded state by lowering its resonance frequencies, thus making more microstates accessible around the equilibrium.
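In the classical (high-temperature) harmonic limit, the vibrational entropy of a structure depends only on its mode frequencies, so the entropy change upon mutation reduces to a ratio of eigenvalues. The relation below is a standard form given for orientation only and is not necessarily the exact expression of Equation 6:

\Delta S_{\mathrm{vib}}^{\,\mathrm{wt}\rightarrow\mathrm{mut}} \;\propto\; \sum_{k=7}^{3N} \ln\!\left(\frac{\lambda_k^{\mathrm{wt}}}{\lambda_k^{\mathrm{mut}}}\right)

Lowering the eigenvalues of the mutant (softer modes) therefore increases the vibrational entropy of its folded state, consistent with the argument above.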

We utilize experimental data from the ProTherm database [56] on the thermodynamic effect of mutations to validate the use of ENCoM to predict protein stability. Here we benefit from the manual curation efforts previously performed to generate a non-redundant subset of ProTherm comprising 303 mutations used for the validation of PoPMuSiC-2.0 [57]. The dataset contains 45 stabilizing mutations (ΔΔG<−0.5 kcal/mol), 84 neutral mutations (ΔΔG in [−0.5, 0.5] kcal/mol) and 174 destabilizing mutations (ΔΔG>0.5 kcal/mol) (Supplementary Table S2). Each protein in the dataset has at least one structure in the PDB database [58]. As we calculate the eigenvectors of the mutated form, we require model structures of the mutants. We generate such models using Modeller [59] and thus assume that the mutation does not drastically affect the structure. Mutations were generated using the mutated.py script from the standard Modeller software distribution. Modeller utilizes a two-pass minimization. The first pass optimizes only the mutated residue, with the rest of the protein fixed. The second pass optimizes the non-mutated neighboring atoms. It is important to stress that our goal is to model the mutated protein as accurately as possible and thus using any method that unrealistically holds the backbone fixed to model the mutant form would be an unnecessary simplification. We observe a backbone RMSD for the whole protein of 0.01 ± 0.01 Å on average. Considering that RMSD is a global measure that could mask more drastic local backbone rearrangements, we also calculated the average maximum Cα displacement, but with a value of 0.13 ± 0.12 Å we are confident that, while not fixed, backbone rearrangements are indeed minimal.
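The two quality checks mentioned above can be computed directly from matched coordinate arrays. The sketch below assumes the wild-type and mutant models have already been superposed and contain one Cα per residue.

```python
import numpy as np

def backbone_deviation(coords_wt, coords_mut):
    """Global RMSD and largest per-residue displacement between two models.

    coords_wt, coords_mut : (N, 3) arrays of matched C-alpha coordinates,
    assumed to be already superposed.
    """
    d = np.linalg.norm(coords_wt - coords_mut, axis=1)  # per-residue displacement
    rmsd = np.sqrt(np.mean(d ** 2))
    return rmsd, d.max()
```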

In the present work we predict the effect of mutations (Equation 6) for ENCoM, ENCoMns, ANM and STeM and compare the results to existing methods for the prediction of the effect of mutations using the PoPMuSiC-2.0 dataset above. We compare our results to those reported by Dehouck et al. [57] for different existing techniques: CUPSAT, a Boltzmann mean-force potential [60]; DMutant, an atom-based distance potential [61]; PoPMuSiC-2.0, a neural network based method [57]; Eris, a force field based approach [62]; I-Mutant 2.0, a support vector machine method [63]; and AUTO-MUTE, a knowledge-based four-body potential [64]. We used the same dataset to generate the data for FoldX 3.0, an empirical full-atom force field [65], and Rosetta [66], which is based on the knowledge-based Rosetta energy function. A negative control model was built with a randomized reshuffling of the experimental data. Figure 4 presents RMSE results for each model. The raw data for the 303 mutations are available in Supplementary Table S3.

Figure 4. Root Mean Square Error (RMSE) on the prediction of the effect of mutations.

RMSE of the linear regression through the origin between experimental and predicted free energy variations (ΔΔG). Box plots generated from 10000 resampling bootstrapping iterations on the entire PoPMuSiC-2.0 dataset (N = 303). With the caveat that the dataset is biased towards destabilizing mutations, ENCoM/ENCoMns predict the effect of mutations with an overall RMSE similar to that of most other methods.

https://doi.org/10.1371/journal.pcbi.1003569.g004

ANM and STeM are as good as the random model when considering all types of mutations together (Figure 4). This is not surprising, as the potentials used in ANM and STeM are exclusively geometry-based and are thus agnostic to sequence. ENCoMns, ENCoM, Eris, CUPSAT, DMutant and I-Mutant 2.0 give similar results and predict significantly better than the random model. AUTO-MUTE, FoldX 3.0, Rosetta and in particular PoPMuSiC-2.0 outperform all of the other models.

The RMSE values for the subset of 174 destabilizing mutations (Figure 5) show similar trends to the whole dataset, with the exceptions of DMutant losing performance and PoPMuSiC-2.0 as well as AUTO-MUTE gaining performance relative to the others. It is important to stress that the low RMSE of PoPMuSiC-2.0 on the overall dataset is to a great extent due to its ability to predict destabilizing mutations.

Figure 5. Root Mean Square Error (RMSE) on the prediction of the effect of destabilizing mutations.

RMSE of the linear regression through the origin between experimental and predicted free energy variations (ΔΔG). Box plots generated from 10000 resampling bootstrapping iterations on the subset of destabilizing mutations (experimental ΔΔG>0.5 kcal/mol) in the PoPMuSiC-2.0 dataset (N = 174). The results for destabilizing mutations mirror to a great extent those for the entire dataset, given that they represent 57% of it.

https://doi.org/10.1371/journal.pcbi.1003569.g005

The subset of 45 stabilizing mutations (Figure 6) gives completely different results from those obtained for destabilizing mutations. AUTO-MUTE, Rosetta, FoldX 3.0 and PoPMuSiC-2.0, which outperformed all of the other models on the whole dataset or on the destabilizing subset, do not predict better than the random model. This is also true for CUPSAT, I-Mutant 2.0 and Eris. ENCoM and DMutant are the only models with RMSE values significantly better than random for the prediction of stabilizing mutations.

Figure 6. Root Mean Square Error (RMSE) on the prediction of the effect of stabilizing mutations.

RMSE of the linear regression through the origin between experimental and predicted free energy variations (ΔΔG). Box plots generated from 10000 resampling bootstrapping iterations on the subset of stabilizing mutations (experimental ΔΔG<−0.5 kcal/mol) in the PoPMuSiC-2.0 dataset (N = 45). With the exception of DMutant and ENCoM, most methods are not substantially better than random, and some are substantially worse, for stabilizing mutations.

https://doi.org/10.1371/journal.pcbi.1003569.g006

ANM and STeM outperform all models on the neutral mutations (Figure 7). All other models fail to predict neutral mutations any better than random. While the accuracy of ANM and STeM in predicting neutral mutations may seem surprising at first, it is in fact an artifact of the methodology. As the wild-type and mutated structures are assumed to maintain the same general backbone structure, the eigenvectors/eigenvalues calculated with ANM or STeM will always be extremely similar for the wild-type and mutant forms. Any differences will arise as a result of small variations in backbone conformation produced by Modeller. As such, ANM and STeM predict almost every mutation as neutral, explaining their high success in this case.

Figure 7. Root Mean Square Error (RMSE) on the prediction of the effect of neutral mutations.

RMSE of the linear regression through the origin between experimental and predicted free energy variations (ΔΔG). Box plots generated from 10000 resampling bootstrapping iterations on the subset of neutral mutations (experimental ΔΔG in [−0.5, 0.5] kcal/mol) in the PoPMuSiC-2.0 dataset (N = 84). Apart from ANM and STeM, which predict most mutations as neutral as an artifact, no method is better than the reshuffled random model at predicting neutral mutations.

https://doi.org/10.1371/journal.pcbi.1003569.g007

At first glance, the comparison of ENCoMns and ENCoM could suggest that a large part of the effect observed comes from a consideration of the total area in contact and not the specific types of amino acids in contact. However, the side chains in contact are already in conformations that minimize unfavourable contacts, either to the extent observed in reality (in the experimental structure) or as a result of the energy minimization performed by Modeller on the mutant form given the local environment. The fact that ENCoM is able to improve on ENCoMns is the truly surprising result and points to the existence of frustration in molecular interactions [67].

Considering that none of the existing models can reasonably predict neutral mutations, the only models that achieve a certain balance in predicting both destabilizing and stabilizing mutations better than random and with low bias are ENCoM and DMutant.

The analysis of the performance of ENCoM in the prediction of different types of mutations in terms of amino acid properties shows that mutations from small (A, N, D, C, G, P, S, V) to big (all other) residues are the most accurately predicted, followed by mutations between non-polar or aromatic residues (A, C, G, I, L, M, F, P, W, V). ENCoM performs poorly on exposed residues (defined as having more than 30% of the surface area exposed to solvent) (Figure 8).

Figure 8. Average RMSE differences for different types of mutations, prediction methods and their combinations with ENCoM.

We calculate the RMSE difference (Eq. 8) for different subsets of the data (columns) and different methods (rows). Methods followed by a ‘+’ denote linear combinations of the named method with ENCoM. The heatmap values (shown within cells) denote the bootstrapped (10000 iterations) average RMSE difference with respect to random and are color coded according to the map on the upper left (lower values in red signify better predictions). For example, the values in the leftmost column representing all data come from the subtraction of the averages in the corresponding box plots in Figure 4 for the non-combined methods and the random model. Different subsets of types of mutations are shown according to certain properties of the amino acids or residues involved: buried (less than 30% solvent-exposed surface area) or exposed (otherwise), small (A, N, D, C, G, P, S and V) or big (otherwise), polar (R, N, D, E, Q, H, K, S, T and Y) or non-polar including hydrophobic and aromatic (otherwise). Both methods and subsets of the data are clustered according to similarities in the RMSE difference profiles. Cases where the combination of a method with ENCoM is beneficial, i.e., where the RMSE difference is lower than for either method in isolation, are denoted by a ‘*’ next to their values within the cell.

https://doi.org/10.1371/journal.pcbi.1003569.g008

Predictions based on linear combination of models

It may in principle be possible to find particular linear combinations of ENCoM and other methods that further improve predictions, given the widely different (and potentially complementary) nature of the various approaches with respect to ENCoM. We performed linear regressions to find parameters involving ENCoM and each of the other methods in turn that maximize the RMSE difference between the combined model and random predictions (Eq. 8). When considering all types of mutations together, all mixed models perform better than either model individually (left-most column in Figure 8). By definition, a mixed model cannot perform worse than the better of the two models individually. The contribution of ENCoM to the improved performance of the combined model varies according to the model. The ratio of the relative contributions (in parenthesis) broadly classifies the methods into three categories: 1. Methods where ENCoM contributes highly, including I-Mutant (1.18±0.25), DMutant (1.07±0.31), CUPSAT (1.02±0.09) and Eris (1.02±0.11); 2. Methods where the contribution of ENCoM is smaller than that of the other method but still significant, including Rosetta (0.90±0.11) and FoldX 3.0 (0.89±0.08); and 3. Methods where the addition of ENCoM has a small beneficial effect, including AUTO-MUTE (0.69±0.13) and especially PoPMuSiC-2.0 (0.18±0.10). The left-hand side dendrogram in Figure 8 clusters the methods according to their overall accuracy relative to random based on the entire profile of predictions (Eq. 8) for different subsets of the data (columns) according to the type of mutations being predicted. This clustering shows that the relative position of the methods is maintained throughout, except for a small rearrangement due to changes in the predictions for CUPSAT. This result suggests that the contribution from combining ENCoM with other methods is uniform irrespective of the type of mutations studied.
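As an illustration of how such a combination can be obtained, the sketch below fits a two-method linear model by least squares. The actual regression and the RMSE-difference measure of Equation 8 are not reproduced here, and the variable names are illustrative.

```python
import numpy as np

def combine_predictions(ddg_encom, ddg_other, ddg_exp):
    """Least-squares weights for a two-method linear combination.

    Fits ddg_exp ~ a * ddg_encom + b * ddg_other (no intercept, mirroring the
    regressions through the origin used for the RMSE measure) and returns the
    weights together with the RMSE of the combined prediction.
    """
    X = np.column_stack([ddg_encom, ddg_other])
    coeffs, *_ = np.linalg.lstsq(X, ddg_exp, rcond=None)
    residuals = ddg_exp - X @ coeffs
    rmse = np.sqrt(np.mean(residuals ** 2))
    return coeffs, rmse
```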

Self-consistency in the prediction of the effect of mutations

One basic requirement for a system that predicts the effect of mutations on stability is that it should be self-consistent, that is, both unbiased and with small error with respect to the prediction of forward and back mutations, as reported by Thiltgen et al. [68]. The authors built a non-redundant set of 65 pairs of PDB structures containing single mutations (called forms A and B) and utilized different models to predict the effect of each mutation going from form A to form B and back. From a thermodynamic point of view, the predicted free energy variations for the forward and back mutations should have the same magnitude and opposite signs, ΔΔG(A→B) = −ΔΔG(B→A).
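One plausible way to operationalize these two quantities (the exact definitions follow Thiltgen et al. [68]) is to look at the sum of each forward and back prediction, which should vanish for a perfectly self-consistent method. The sketch below takes the mean of these sums as the bias and their root-mean-square as the error; this is an assumed reading of those definitions.

```python
import numpy as np

def self_consistency(ddg_forward, ddg_reverse):
    """Self-consistency bias and error for forward/back mutation predictions.

    For a perfectly consistent predictor, ddg_forward[i] == -ddg_reverse[i],
    so every sum below should be zero. The mean of the sums measures the
    systematic bias (e.g. toward destabilization); their root-mean-square
    measures the error.
    """
    sums = np.asarray(ddg_forward) + np.asarray(ddg_reverse)
    bias = sums.mean()
    error = np.sqrt(np.mean(sums ** 2))
    return bias, error
```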

Using the Thiltgen dataset we performed a similar analysis for ENCoM, ENCoMns, ANM, STeM, CUPSAT, DMutant, PoPMuSiC-2.0 and a random model (Gaussian prediction with unit standard deviation). For the remaining methods (Rosetta, Eris and I-Mutant) we utilize the data provided by Thiltgen. We removed three cases involving prolines, as such cases produce backbone alterations. Furthermore, PoPMuSiC-2.0 failed to return results for five cases. The final dataset therefore contains 57 pairs (Supplementary Table S4). The CUPSAT and AUTO-MUTE servers failed to predict 25 and 32 cases respectively. As these failure rates are significant considering the size of the dataset, we prefer not to include these two methods in Figures 9 and 10 (the remaining cases appear, however, in Supplementary Table S4).

Figure 9. Self-consistency bias.

The bias quantifies the tendency of a method to predict mutations in one direction more accurately than in the opposite direction. Machine-learning based methods in particular show a high bias. ENCoM/ENCoMns have low bias.

https://doi.org/10.1371/journal.pcbi.1003569.g009

Figure 10. Self-consistency error.

The error quantifies the magnitude of the inconsistencies in the prediction of forward and back mutations. Box plots were generated from 10000 resampling bootstrapping iterations for the 57 protein pairs in the Thiltgen dataset. ENCoM/ENCoMns are the methods with the lowest self-consistency errors.

https://doi.org/10.1371/journal.pcbi.1003569.g010

The results in Figure 9 show that, compared to the random model (a positive control in this case), Rosetta and FoldX 3.0 display moderate bias while PoPMuSiC-2.0 and I-Mutant show significant bias. All biased methods are biased toward the prediction of destabilizing mutations (data not shown), in agreement with the results in Figure 3. DMutant, Eris, ENCoM and ENCoMns are the only models with bias comparable to that of the random model.

ENCoM, ENCoMns, and to a lesser extent Rosetta and DMutant have lower errors than the random model (Figure 10). All other methods display an error equal to or higher than that of the random model. ENCoM and ENCoMns vastly outperform all the other models in terms of error.

Lastly, STeM and ANM show low and moderate biases respectively and errors equivalent to random (data not shown), but as mentioned, these methods cannot be used for the prediction of mutations (other than neutral mutations as an artifact).

Prediction of NMR S2 order parameter differences

Mutations may not only affect protein stability but also protein function. While experimental data is less abundant, one protein in particular, dihydrofolate reductase (DHFR) from E. coli, has been widely used experimentally to understand this relationship [69], [70]. Recently, Boher et al. [47] analyzed the effect of the G121V mutation on protein dynamics in DHFR by NMR spectroscopy. This mutation is located 15 Å away from the binding site but reduces enzyme catalysis 200-fold with a negligible effect on protein stability (0.70 kcal/mol). The authors evaluated, among many other parameters, the S2 order parameter of the folic acid-bound form for the wild-type and mutated forms and identified the regions where the mutation affects flexibility. We calculated b-factor differences (Equation 4) between the folate-bound wild type (PDB ID 1RX7) and the G121V mutant (modeled with Modeller) forms of DHFR (Supplementary Table S5). We obtain good agreement (Pearson correlation = 0.61) between our predicted b-factor differences and the S2 differences (Figure 11). As mentioned earlier, the overall correlation of 0.54 in the prediction of b-factors (Figure 1) appears, at least in this case, to be sufficient to capture essential functional information.

Figure 11. Prediction of NMR S2 order parameter differences for the G121V mutation on DHFR from E. coli.

Experimental values (red, the bar represents the experimental error) are compared to the inverse normalized predicted b-factor differences, ΔB (Equation 4, blue line), showing a Pearson correlation coefficient of 0.6.

https://doi.org/10.1371/journal.pcbi.1003569.g011

Discussion

Our results show that a small modification of the long-range interaction term in the potential energy function of STeM had an important positive impact on the model. This small change improves the method relative to existing NMA methods in the traditional areas where coarse-grained normal mode analysis is applied, such as the prediction of b-factors and conformational sampling (overlap). More importantly, however, it opens an entirely new area of application for coarse-grained normal mode analysis methods. Specifically, ENCoM is the first coarse-grained normal mode analysis method that makes it possible to take into consideration the specific sequence of the protein in addition to its geometry. This is introduced through a modification of the long-range interactions to account for the types of atoms in contact, modulated by their surface area in contact. As a validation of the approach we explored the ability of the method to predict the effect of mutations on protein stability. In doing so we created the first entropy-based methodology to predict the effect of mutations on the thermodynamic stability of proteins. This methodology is entirely orthogonal to existing methods, which are either machine-learning or enthalpy based. Not only is the approach novel, but the method also compares extremely favourably to other methods in terms of both error and bias.

As the approach taken in ENCoM is completely different from existing methods for the prediction of the effect of mutations on protein stability, a new opportunity arises to combine ENCoM with enthalpy-based and machine-learning methods. Unfortunately, our attempts to create a naïve method based on linear combinations of the predictions of ENCoM and the different methods presented were unsuccessful, perhaps due to the large biases characteristic of the different methods.

To assess the relative importance of contact area and the modulation of interactions by atom types, we tested a model with non-specific atom-type interactions (ENCoMns); this model is insensitive to atom types but remains sensitive to the orientation of side-chain atoms. While a large fraction of the observed effect can be attributed to surfaces in contact alone, ENCoM is consistently better than ENCoMns, particularly at predicting destabilizing mutations, where the possibility of accommodating unfavourable interactions is more restricted. We cannot however exclude the effect of the intrinsic difficulty of modeling destabilizing mutations. For stabilizing mutations, the near equivalence of ENCoM and ENCoMns may be explained in part by the successful energy minimization of the mutated side-chain performed by Modeller. ANM and STeM failed to predict the effect of mutations on the whole dataset. They were not expected to perform well because their respective potentials only take into account the positions of alpha carbons (backbone geometry). As such, ANM and STeM tend to predict mutations as neutral, explaining their excellent performance on the neutral subset and their failure otherwise. Our results suggest that surfaces in contact are essential in a coarse-grained NMA model to predict the effect of mutations, and that specific interactions between atom types are necessary to capture more subtle effects, particularly stabilizing mutations.

ENCoM is consistently better than ENCoMns in the prediction of loop or domain movements, irrespective of whether the movement is coupled to ligand binding or of the starting structure (apo or holo form), and both outperform ANM and STeM. Our results corroborate previous work showing that a mixed coarse-grained method adding atomistic resolution to loops is capable of improving the prediction of loop movements [40]. ENCoM performs considerably better than STeM throughout despite having a very similar potential, showing the importance of surfaces in contact in the prediction of movements. There is little difference between ENCoMns and ENCoM in the prediction of b-factors, but both perform worse than ANM, STeM and GNM. At least in the case of DHFR, b-factor differences capture some essential characteristics of the system as measured by NMR. However, one should be careful in placing too much emphasis on the validation of b-factor predictions using experimental data derived from crystals, as these are affected to a great extent by rigid-body motions within the crystal [71].

PoPMuSiC-2.0, AUTO-MUTE, FoldX 3.0 and Rosetta perform better than the other models on the whole test dataset of mutations. However, the dataset consists of 15% stabilizing, 57% destabilizing and 28% neutral mutations. When looking at each subset, machine-learning- or enthalpy-based models failed to predict better than random on the stabilizing mutations subset. Biases in the dataset may have affected the training of machine-learning methods. For example, the training set of PoPMuSiC-2.0 contains 2648 mutations in proportions that are similar to those in the testing set, with 60%, 29% and 11% destabilizing, neutral and stabilizing mutations respectively. While it is true that most mutations tend to be destabilizing, if one is interested in detecting stabilizing mutations, a method overtrained on destabilizing mutations will not meet expectations. Indeed, PoPMuSiC-2.0 and I-Mutant, the two machine-learning-based methods, have larger biases and errors than other methods in their predictions. Our method relies on a model structure of the mutant. As the modeling may fail to find the most stable side-chain conformation, it could have a bias toward giving slightly higher energies to the mutant. Notwithstanding this potential bias, ENCoM has the lowest error and bias. This may be a case where less is more, as the coarse-grained nature of the method makes it less sensitive to errors in modeling that may affect enthalpy-based methods to a greater extent. Finally, there is one more advantage in the approach taken in ENCoM. As the network model is globally connected, it indirectly considers the entire protein, while in existing enthalpy or machine-learning methods the effect of a mutation is calculated mostly from a local point of view.

The prediction of the thermodynamic effect of mutations is very important for understanding disease-causing mutations as well as in protein engineering. With respect to human diseases, and particularly cancer mutations, one of the factors that may lead to tumour suppressor or oncogenic mutations is their effect on stability (the authors thank Gaddy Getz from the Broad Institute for first introducing us to this hypothesis). Specifically, destabilizing mutations in tumour suppressor genes or, alternatively, stabilizing mutations in oncogenes may be driver mutations in cancer. Therefore the prediction of stabilizing mutations may be very important to predict driver mutations in oncogenes. Likewise, in protein engineering, one major goal is that of improving protein stability through the prediction of stabilizing mutations. Such mutations may be useful not only as the final goal (for purification or industrial purposes) but also to create a ‘stability buffer’ that permits the introduction of additional, potentially destabilizing mutations that may be relevant to create the intended new function.

The work presented here is, to our knowledge, also the most extensive test of existing methods for the prediction of the effect of mutations on protein stability. The majority of methods tested in the present work fail to predict stabilizing mutations. However, we are aware that the reshuffled random model used may be too stringent given the excessive number of destabilizing mutations in the dataset. The only models that predict stabilizing as well as destabilizing mutations are ENCoM and DMutant; however, ENCoM is the only method with low self-consistency bias and error.

While the contribution of side-chain entropy to stability is well established [72], [73], here we use backbone normal modes to predict stability. As a consequence of the relationship between normal modes and entropy, our results attest to the importance of backbone entropy to stability and increase our understanding of the overall importance of entropy to stability. The strong trend observed in the behaviour of different parameter sets with respect to the α4 parameter is very interesting. Lower values are associated with better predictions of conformational changes while higher values are associated with better b-factor predictions. One way to rationalize this observation is to consider that higher α4 values lead to a rigidification of the structure, adding constraints and restricting overall motion. Likewise, lower α4 values remove constraints and thus lead to higher overlap.

We used ENCoM to predict the functional effects of the G121V mutant of E. coli DHFR and compared the predictions to NMR data. This position is part of a correlated network of residues that play a role in enzyme catalysis, but the mutation has little effect on stability. The mutation affects this network by disrupting the movement of residues that are far from the binding site. We can predict the local changes in S2 with ENCoM. As these predictions are based on b-factor calculations, this result shows that, at least in this case, even with a b-factor prediction correlation lower than that of ANM, STeM and GNM, we can detect functionally relevant variations. Clearly, despite the greater performance of GNM, ANM or STeM in the calculation of b-factors, these methods cannot predict b-factor differences arising from mutations, as their predictions are the same for the two forms. While a more extensive study involving S2 NMR order parameters is necessary, our results serve as an example against relying too heavily on crystallographic b-factors for the evaluation of normal mode analysis methods.

Methods

The fundamentals of Normal Mode Analysis (NMA) have been extensively reviewed [74], [75]. The key assumption in NMA is that the protein is in an equilibrium state around which fluctuations can be described using a quadratic approximation of the potential via a Taylor series expansion. At equilibrium, the force constants are summarized in the Hessian matrix H, which contains the partial second derivatives of the potential with respect to the coordinates. The potential used in ENCoM is similar to that of STeM, a Go-like potential in which the closer a conformation is to the reference (in this case equilibrium) conformation, the lower the energy:

V = \alpha_1 \sum_{\mathrm{bonds}} (r_i - r_i^0)^2 + \alpha_2 \sum_{\mathrm{angles}} (\theta_i - \theta_i^0)^2 + \alpha_3 \sum_{\mathrm{dihedrals}} \left[ \left(1 - \cos(\phi_i - \phi_i^0)\right) + \left(1 - \cos 3(\phi_i - \phi_i^0)\right) \right] + \alpha_4 \sum_{i<j} \beta_{ij} \left[ 5\left(\frac{r_{ij}^0}{r_{ij}}\right)^{12} - 6\left(\frac{r_{ij}^0}{r_{ij}}\right)^{10} \right] \qquad (1)

The principal difference between the potential above and that of STeM is the set of β_ij terms that modulate non-bonded interactions between amino acid pairs according to the surface area in contact. Specifically,

\beta_{ij} = \sum_{k=1}^{n_i} \sum_{l=1}^{n_j} \epsilon_{T(k),T(l)} \, S_{kl} \qquad (2)

where ε_{T(k),T(l)} represents a pairwise interaction energy between the atom types T(k) and T(l) of atoms k and l, respectively, of amino acids i and j containing n_i and n_j atoms each. Finally, S_{kl} represents the surface area in contact between atoms k and l, calculated analytically [48]. We utilize the atom-type classification of Sobolev et al. [76], containing 8 atom types. A matrix with all interactions between atom types set to the value 1 is used in the non-specific ENCoMns model. In Figure 12 we illustrate, with the concrete case of a set of 3 amino acids (D11, D45 and R141) in M. tuberculosis ribose-5-phosphate isomerase (PDB ID 2VVO), the differences between ENCoM, STeM and ANM in terms of the spring strengths associated with different amino acid pairs. D45 is equally distant from R141 and D11 (around 6.0 Å) and interacts with R141 but not with D11. Likewise, D11 does not interact with R141, at a Cα distance of 11.6 Å. ANM assigns equal strength to all three pairwise Cα springs (as their distances fall within the 18.0 Å threshold). STeM assigns equal spring strengths to the D11–D45 and D45–R141 pairs. Among the three methods, ENCoM is the only one to properly assign an extremely weak strength to the D11–R141 pair, a still weak but slightly stronger strength to the D11–D45 pair (due to their closer distance) and a very strong strength to the spring representing the D45–R141 interaction.
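A minimal sketch of the computation of β_ij (Equation 2 as reconstructed above), assuming the pairwise surface areas in contact have already been computed and using the 8 Sobolev atom types; setting the ε matrix to all ones recovers the non-specific ENCoMns variant.

```python
import numpy as np

def beta_ij(atom_types_i, atom_types_j, contact_surface, epsilon):
    """Long-range modulation factor between residues i and j.

    atom_types_i, atom_types_j : integer atom-type indices (0..7 for the
                                 8 Sobolev atom types) of the atoms of each residue
    contact_surface            : (n_i, n_j) matrix of pairwise surface areas in
                                 contact (precomputed analytically)
    epsilon                    : (8, 8) atom-type interaction matrix; a matrix of
                                 ones reproduces the non-specific ENCoMns variant
    """
    total = 0.0
    for k, tk in enumerate(atom_types_i):
        for l, tl in enumerate(atom_types_j):
            total += epsilon[tk, tl] * contact_surface[k, l]
    return total
```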

Figure 12. Illustration of the representation of inter-residue interactions by the different NMA methods.

The figure shows three amino acids (D11, D45 and R141) from the M. tuberculosis ribose-5-phosphate isomerase (PDB ID 2VVO). The distances between alpha carbons are shown with the yellow dotted lines and labeled in black. Interaction strengths relative to ANM (in red) are shown for STeM (green) and ENCoM (yellow). ANM treats all pairs as equal, while STeM treats D11 and R141 equally with respect to D45 even though the interaction between D45 and R141 is much stronger by virtue of side-chain interactions, as correctly described by ENCoM.

https://doi.org/10.1371/journal.pcbi.1003569.g012

The Hessian matrix can be decomposed into eigenvectors v_k and their associated eigenvalues λ_k. For a system with N amino acids (each represented by one node in the elastic network), there are 3N eigenvectors. Each eigenvector describes a mode of vibration of all nodes at the resonance frequency defined by the corresponding eigenvalue, in other words the simultaneous movement of each node (Cα atoms in this case) in distinct individual directions. The first 6 eigenvectors represent rigid-body translations and rotations of the entire system. The remaining eigenvectors represent internal motions. The eigenvectors associated with lower eigenvalues (slower modes) represent more global or cooperative movements, while the eigenvectors associated with higher eigenvalues (faster modes) represent more local movements. Any conformation R of the protein can be described by the equilibrium conformation plus a linear combination of the eigenvectors with different amplitudes γ_k:

\mathbf{R} = \mathbf{R}^0 + \sum_{k=7}^{3N} \gamma_k \mathbf{v}_k \qquad (3)

The source code for ENCoM is freely available at http://bcb.med.usherbrooke.ca/encom.
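The two operations described in this paragraph, extracting internal modes from the Hessian and generating a conformation from a set of amplitudes (Equation 3), can be sketched as follows; this is a generic illustration and omits details such as mass weighting.

```python
import numpy as np

def normal_modes(hessian, n_rigid=6):
    """Diagonalize the Hessian and return the internal-motion modes.

    Returns eigenvalues and eigenvectors with the `n_rigid` near-zero
    rigid-body modes removed; columns of `vecs` are the eigenvectors.
    """
    vals, vecs = np.linalg.eigh(hessian)          # ascending eigenvalues
    return vals[n_rigid:], vecs[:, n_rigid:]

def displaced_conformation(ref_coords, vecs, amplitudes):
    """New conformation as equilibrium plus a linear combination of modes."""
    displacement = vecs @ np.asarray(amplitudes)  # (3N,) displacement vector
    return ref_coords + displacement.reshape(ref_coords.shape)
```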

In terms of running time, the computational cost of running ENCoM is only slightly higher than that of other methods representing the protein structure with one node per amino acid. The main bottleneck in terms of computational time is the diagonalization of the Hessian matrix. As this matrix is the same size for ENCoM, ANM, STeM and GNM by virtue of considering a single node per amino acid, all methods should in principle run equivalently. Differences occur due to pre-processing, in particular with ENCoM, where this step is more involved due to the more detailed calculations required for the measurement of surface areas in contact. Taking the dataset used for the prediction of b-factors as an example, we obtain average running times of 23.8, 30.2 and 34 seconds for STeM, ANM and ENCoM respectively on an Intel Core i7 CPU Q 740 @ 1.73 GHz laptop.

Parameterization of ENCoM

In order to obtain a set of parameters to be used with ENCoM, we performed a sparse exhaustive search over integer values of the logarithm of the parameters α_i for i = 1 to 4 to maximize the prediction ability of the algorithm in terms of overlap and the prediction of mutations. In other words, we searched all combinations of 13 distinct relative orders of magnitude for the set of 4 parameters. For each parameter set, we calculated the sum of the bootstrapped median RMSE (see below) Z-scores for the prediction of stabilizing and destabilizing mutations. Keeping in mind that lower RMSE values represent better predictions, the 2000 parameter sets (out of 28561 combinations) with the highest combined Z-score were then used to calculate Z-scores for overlap in domain and loop movements. As our goal is to obtain a parameter set that combines low RMSE and high overlap, we ranked the 1000 parameter sets according to the combined Z-score over all measures. The parameter set with the highest combined score was selected for ENCoM (solid black line in Figure 13). The optimization of the bootstrapped median is equivalent to a training procedure with leave-many-out testing. Given the dichotomy in predicting the effect of mutations and overlap on the one hand and b-factors on the other, we also provide the best parameter set observed for the prediction of b-factors.
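A sketch of such a sparse search is given below. The base-10 exponent range is an assumption chosen only to give 13 values per parameter (13⁴ = 28561 combinations, as in the text), and `score` stands for the combined Z-score described above.

```python
import itertools
import numpy as np

def sparse_log_grid_search(score, exponents=range(-6, 7)):
    """Sparse exhaustive search over orders of magnitude of the four alphas.

    `exponents` spans 13 integer values; `score` is a callable returning the
    quantity to maximize (e.g. the combined Z-score) for a given
    (alpha1, alpha2, alpha3, alpha4) tuple.
    """
    best_alphas, best_score = None, -np.inf
    for exps in itertools.product(exponents, repeat=4):
        alphas = tuple(10.0 ** e for e in exps)   # one order-of-magnitude step per value
        s = score(alphas)
        if s > best_score:
            best_alphas, best_score = alphas, s
    return best_alphas, best_score
```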

Figure 13. Performance of different parameter sets on the prediction of mutations, b-factors and motions.

We present as a parallel coordinates plot the bootstrapped median RMSE for stabilizing and destabilizing mutations, the average best overlap for domain and loop movements, as well as the self-consistency bias and error. The four right-most columns include the logarithm of the 4 α parameters. Different parameter sets are colored based on b-factor correlations (red gradient) or domain movement overlaps (blue gradient). The solid black line represents the specific set of parameters used in ENCoM, while the dashed line represents ENCoM using the set of α parameters employed in STeM. There is a dichotomy in parameter space such that most sets of parameters are good either at predicting b-factors or at predicting overlap and mutations.

https://doi.org/10.1371/journal.pcbi.1003569.g013

The exploration of parameter space shows a clear trade-off between the prediction of mutations (low RMSE), conformational sampling (high overlap) and b-factors (high correlations). Parameter sets that improve the prediction of b-factors are invariably associated with poor prediction of conformational changes (low overlap) for both domain and loop movements, and with variable RMSE for the prediction of mutations (red lines in Figure 13). Conversely, parameter sets that predict b-factors poorly perform better in the prediction of conformational changes and of the effect of mutations (blue lines in Figure 13). The parameters used in STeM are arbitrary, taken without modification from a previous study focusing on folding [77]. As expected, this set of parameters can be considerably improved upon, as can be observed in Figure 13 (dashed line).

The four right-most variables in the parallel coordinates plot in Figure 13 show the logarithm of the α parameters for each parameter set. Both classes of parameter sets, those better for b-factors (in red) and those better for overlap/RMSE (in blue), arise from widely diverging values of each parameter across several orders of magnitude. There are, however, some patterns, most notably for α4, where there is an almost perfect separation of parameter sets around α4 = 1. Interestingly, higher values of α1 and α2, associated with stronger constraints on distances and angles, also tend to be associated with better overlap values. While it is likely that a better-performing set of parameters can be found, the wide variation of values across many orders of magnitude shows that, within certain limits, the method is robust with respect to the choice of parameters. This result justifies the sparse search employed.

Bootstrapping

Bootstrapping is a simple and general statistical technique to estimate standard errors, p-values and other quantities associated with finite samples of unknown distributions. In particular, bootstrapping helps mitigate the effect of outliers and offers better estimates for small samples. Bootstrapping is a process by which replicates (here 10000) of the sample points are stochastically generated (with repetition) and used to measure statistical quantities; in particular, it allows the quantification of the error of the mean [78]–[82]. Explained in simple terms, two extreme bootstrap samples would be one in which the estimate of the real distribution of values is made entirely of replicates of the best case and another made entirely of the worst case; some more realistic combination of cases in fact better describes the real distribution. Thus bootstrapping, while still affected by any biases present in the sample of cases, helps alleviate them to some extent.
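The following is a minimal sketch of this resampling procedure, assuming a one-dimensional array of per-case values (e.g. per-protein correlations or per-mutation errors); it is an illustration rather than the exact code used here.

```python
import numpy as np

def bootstrap(values, statistic=np.median, n_replicates=10000, seed=0):
    """Resample `values` with replacement `n_replicates` times and return the
    statistic of interest for each replicate."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    stats = np.empty(n_replicates)
    for i in range(n_replicates):
        sample = rng.choice(values, size=values.size, replace=True)
        stats[i] = statistic(sample)
    return stats  # stats.mean() gives the bootstrapped estimate, stats.std() its error
```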

Predicted b-factors

One of the most common types of experimental data used to validate normal mode models is the correlation between predicted and experimentally determined b-factors. B-factors measure how much each atom oscillates around its equilibrium position [6]. Predicted b-factors are calculated as previously described [35]. Namely, for a given Cα node i, one sums over all eigenvectors representing internal movements (n = 7 to 3N) the squared ith components of each eigenvector in the spatial coordinates x, y and z, normalized by the corresponding eigenvalue:

$$b_i \propto \sum_{n=7}^{3N} \frac{\left(v_{n,x}^{i}\right)^2 + \left(v_{n,y}^{i}\right)^2 + \left(v_{n,z}^{i}\right)^2}{\lambda_n} \qquad (4)$$

We calculate the Pearson correlation between predicted and experimental b-factors for each protein and average random samples according to the bootstrapping protocol described above.
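A minimal numpy sketch of equation (4) follows, assuming `vals` and `vecs` are the eigenvalues and eigenvectors (as columns) of the 3N×3N Hessian, e.g. as returned by numpy.linalg.eigh; the predicted values are proportional to, not on the absolute scale of, experimental b-factors.

```python
import numpy as np

def predicted_bfactors(vals, vecs, first_internal=6):
    """Per-residue predicted b-factors (up to a constant) from internal modes."""
    n_nodes = vecs.shape[0] // 3
    b = np.zeros(n_nodes)
    for n in range(first_internal, vecs.shape[1]):  # internal modes only
        mode = vecs[:, n].reshape(n_nodes, 3)       # x, y, z components per node
        b += np.sum(mode ** 2, axis=1) / vals[n]    # squared components / eigenvalue
    return b

# Pearson correlation with experimental b-factors:
# r = np.corrcoef(predicted_bfactors(vals, vecs), experimental_b)[0, 1]
```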

Overlap

The overlap is a measure that quantifies the similarity between the direction of movement described by an eigenvector calculated from a starting structure and the variation in coordinates observed between that conformation and a target conformation [52], [53]. In other words, the goal of overlap is to quantify to what extent movements along a particular eigenvector can describe another conformation. The overlap of the nth mode, described by the eigenvector $\vec{v}_n$, is given by

$$O_n = \frac{\left|\vec{v}_n \cdot \Delta\vec{r}\right|}{\left\|\vec{v}_n\right\| \left\|\Delta\vec{r}\right\|} \qquad (5)$$

where $\Delta\vec{r}$ represents the vector of displacements of coordinates between the starting and target conformations. The larger the overlap, the closer one can get to the target conformation from the starting one through movements defined entirely by the nth eigenvector. We calculate the best overlap among the first 10 slowest modes representing internal motions.
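A minimal sketch of equation (5) is given below, assuming the starting and target coordinates have already been superposed and are supplied as flat 3N arrays (or N×3 arrays that are flattened); it is illustrative rather than the exact implementation.

```python
import numpy as np

def overlap(mode, displacement):
    """Normalized absolute projection of the displacement onto one mode."""
    return abs(np.dot(mode, displacement)) / (
        np.linalg.norm(mode) * np.linalg.norm(displacement))

def best_overlap(vecs, start_coords, target_coords, first_internal=6, n_modes=10):
    """Best overlap among the first `n_modes` slowest internal modes."""
    delta = (np.asarray(target_coords) - np.asarray(start_coords)).ravel()
    return max(overlap(vecs[:, n], delta)
               for n in range(first_internal, first_internal + n_modes))
```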

Evaluating the effect of mutations on dynamics

Insofar as the simplified elastic network model captures essential characteristics of the dynamics of proteins around their equilibrium structures, the eigenvalues obtained from the normal mode analysis can be used directly to define entropy differences around equilibrium. Following earlier work [54], [55], the vibrational entropy difference between two conformations A and B in terms of their respective sets of eigenvalues is given by

$$\Delta S_{vib}^{A \rightarrow B} = \frac{k_B}{2} \sum_{n=7}^{3N} \ln \frac{\lambda_n^{A}}{\lambda_n^{B}} \qquad (6)$$

In the present work the enthalpic contributions to the free energy are completely ignored; we therefore directly compare experimental ΔG values to predicted ΔS values. In order to use the same nomenclature as existing published methods, we use ΔΔG to denote the predicted variation of free energy as a measure of the stability conferred by a mutation.
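A minimal sketch of equation (6) follows, assuming the eigenvalue arrays of states A and B (e.g. wild type and mutant) come from the same ordering of internal modes; the result is in units of k_B, and stiffer modes (larger eigenvalues) correspond to lower entropy.

```python
import numpy as np

def delta_vibrational_entropy(vals_a, vals_b, first_internal=6):
    """Vibrational entropy change A -> B (in units of k_B) from eigenvalues."""
    la = np.asarray(vals_a, dtype=float)[first_internal:]
    lb = np.asarray(vals_b, dtype=float)[first_internal:]
    return 0.5 * np.sum(np.log(la / lb))  # S_B - S_A
```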

Root mean square error

A linear regression through the origin is built between predicted ΔΔG and experimental ΔΔG values to evaluate the prediction ability of the different models. The use of this type of regression is justified by the fact that comparing a protein to itself (in the absence of any mutation) should have no impact on the energy of the model, and the model should therefore always predict an experimental variation of zero. A linear regression that does not pass through the origin would instead predict a non-zero value equal to the intercept term. In other words, the effect of two consecutive mutations, going from the wild type to a mutated form and back to the wild type (WT→M→WT), would not result in the expected net null change.

The accuracy of the different methods was evaluated using the bootstrapped average root mean square error of a linear regression through the origin between the predicted and experimental values. We refer to this as RMSE for short and use it to describe the strength of the relationship between experimental and predicted data.
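The following is a minimal sketch of this evaluation: a single-slope, zero-intercept least-squares fit between predicted and experimental values and the RMSE of that fit. Which variable is regressed on which is a convention chosen here for illustration.

```python
import numpy as np

def rmse_through_origin(predicted, experimental):
    """RMSE of a regression through the origin (experimental ~ slope * predicted)."""
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(experimental, dtype=float)
    slope = np.dot(x, y) / np.dot(x, x)   # least-squares slope with zero intercept
    residuals = y - slope * x
    return np.sqrt(np.mean(residuals ** 2))
```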

Self-consistency bias and error in the prediction of the effect of mutations

If one were to plot the predicted ΔΔGA→B against ΔΔGB→A and trace the line y = −x, the bias would represent the tendency of a model to have points unequally distributed above and below that line, while the error would represent how far the points lie from the line. In other words, considering a dataset of forward and back predictions, the error measures how much the predicted ΔΔG values differ and the bias how skewed the predictions are towards the forward or back direction [68]. A perfect model, both self-consistent and unbiased, would have all points on the line. Statistically, the measures of bias and error are positively correlated: the higher the error for a particular method, the higher the chance of bias.
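A minimal sketch of these two quantities is shown below, following the geometric reading above (distance of each forward/back pair to the line y = −x); the exact normalization used in [68] and here may differ, so this is illustrative only.

```python
import numpy as np

def self_consistency(forward, back):
    """Bias (signed mean distance to y = -x) and error (RMS distance) for
    paired forward (A->B) and back (B->A) ddG predictions."""
    f = np.asarray(forward, dtype=float)
    b = np.asarray(back, dtype=float)
    signed_dist = (f + b) / np.sqrt(2.0)   # signed distance to the line y = -x
    bias = signed_dist.mean()
    error = np.sqrt(np.mean(signed_dist ** 2))
    return bias, error
```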

Linear combination of models

We determined the efficacy of linear combinations involving ENCoM and each of the other models for the prediction of the effect of mutations on thermostability as follows. For a given bootstrap sample of the data, we rescaled the predictions of each model:

$$\widehat{\Delta\Delta G}^{\,j} = \frac{\overrightarrow{\Delta\Delta G}^{\,j} - \left\langle \overrightarrow{\Delta\Delta G}^{\,j} \right\rangle}{\sigma\!\left(\overrightarrow{\Delta\Delta G}^{\,j}\right)} \qquad (7)$$

where the vector notation signifies all data points in the particular bootstrap sample and the index j represents each model as well as the random model. We then use singular value decomposition on the normalized predicted values to calculate the parameters β that maximize the RMSE difference between the linear combination model and the random predictions:

$$\left(\beta_{ENCoM}, \beta_{j}\right) = \arg\max_{\beta} \left[ RMSE\!\left(\overrightarrow{\Delta\Delta G}^{\,rand}\right) - RMSE\!\left(\overrightarrow{\Delta\Delta G}^{\,comb}\right) \right] \qquad (8)$$

where

$$\overrightarrow{\Delta\Delta G}^{\,comb} = \beta_{ENCoM}\,\widehat{\Delta\Delta G}^{\,ENCoM} + \beta_{j}\,\widehat{\Delta\Delta G}^{\,j} \qquad (9)$$

The bootstrapped average is then calculated from the 10000 bootstrap iterations. The relative contribution of ENCoM and the model under consideration is given by the ratio of the parameters $\beta_{ENCoM}/\beta_{j}$. It is interesting to note that this ratio could be seen as an effective temperature factor, particularly considering that predicted values are primarily enthalpic in nature for certain methods but entropy-based in ENCoM.
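A hypothetical sketch of one such combination is shown below: the predictions of each model are z-scored (as in equation 7), the combination coefficients are obtained by SVD-based least squares against the experimental values, and the RMSE of the combination is reported for comparison with random predictions. The normalization and objective shown are plausible readings of the text, not necessarily the authors' exact choices.

```python
import numpy as np

def zscore(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def combine(pred_encom, pred_other, experimental):
    """Fit beta coefficients for a two-model linear combination and return
    them with the RMSE of the combined prediction."""
    y = np.asarray(experimental, dtype=float)
    X = np.column_stack([zscore(pred_encom), zscore(pred_other)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # solved via SVD internally
    combined = X @ beta
    rmse = np.sqrt(np.mean((y - combined) ** 2))
    return beta, rmse  # beta[0]/beta[1]: relative contribution of ENCoM
```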

Supporting Information

Table S1.

Raw data for the calculation of b-factor correlations.

https://doi.org/10.1371/journal.pcbi.1003569.s001

(BZ2)

Table S2.

Raw overlap calculations for the different methods.

https://doi.org/10.1371/journal.pcbi.1003569.s002

(BZ2)

Table S3.

Experimental and predicted ΔΔG values on the effect of mutations.

https://doi.org/10.1371/journal.pcbi.1003569.s003

(BZ2)

Table S4.

Raw data for the calculation of self-consistency bias and error on the prediction of forward and back mutations.

https://doi.org/10.1371/journal.pcbi.1003569.s004

(BZ2)

Table S5.

Raw ΔB data and experimental S2 NMR order parameter for the G121V DHFR mutant.

https://doi.org/10.1371/journal.pcbi.1003569.s005

(BZ2)

Acknowledgments

The authors would like to thank Profs. Pierre Lavigne and Jean-Guy LeHoux from the department of Biochemistry, Université de Sherbrooke for useful discussions throughout the development of the method. RJN is part of Centre de Recherche Clinique Étienne-Le Bel, a member of the Institute of Pharmacology of Sherbrooke, PROTEO (the Québec network for research on protein function, structure and engineering) and GRASP (Groupe de Recherche Axé sur la Structure des Protéines).

Author Contributions

Conceived and designed the experiments: VF RJN. Performed the experiments: VF. Analyzed the data: VF RJN. Contributed reagents/materials/analysis tools: VF RJN. Wrote the paper: VF RJN.

References

1. Gaudreault F, Chartier M, Najmanovich R (2012) Side-chain rotamer changes upon ligand binding: Common, crucial, correlate with entropy and rearrange hydrogen bonding. Bioinformatics 28: i423–i430.
2. Wu Z, Xing J (2012) Functional roles of slow enzyme conformational changes in network dynamics. Biophys J 103: 1052–1059.
3. Vaidehi N (2010) Dynamics and flexibility of G-protein-coupled receptor conformations and their relevance to drug design. Drug Discov Today 15: 951–957.
4. Mittag T, Kay LE, Forman-Kay JD (2010) Protein dynamics and conformational disorder in molecular recognition. J Mol Recognit 23: 105–116.
5. Berman HM, Coimbatore Narayanan B, Di Costanzo L, Dutta S, Ghosh S, et al. (2013) Trendspotting in the Protein Data Bank. FEBS Lett 587: 1036–1045.
6. Rhodes G (2010) Crystallography Made Crystal Clear. Academic Press.
7. Eyal E, Gerzon S, Potapov V, Edelman M, Sobolev V (2005) The limit of accuracy of protein modeling: influence of crystal packing on protein structure. J Mol Biol 351: 431–442.
8. Garbuzynskiy SO, Melnik BS, Lobanov MY, Finkelstein AV, Galzitskaya OV (2005) Comparison of X-ray and NMR structures: is there a systematic difference in residue contacts between X-ray- and NMR-resolved protein structures? Proteins 60: 139–147.
9. Göbl C, Tjandra N (2012) Application of Solution NMR Spectroscopy to Study Protein Dynamics. Entropy 14: 581–598.
10. Kleckner IR, Foster MP (2011) An introduction to NMR-based approaches for measuring protein dynamics. Biochim Biophys Acta 1814: 942–968.
11. Foster MP, McElroy CA, Amero CD (2007) Solution NMR of large molecules and assemblies. Biochemistry 46: 331–340.
12. Ruschak AM, Kay LE (2012) Proteasome allostery as a population shift between interchanging conformers. Proc Natl Acad Sci U S A 109: E3454–E3462.
13. Karplus M, Kuriyan J (2005) Molecular dynamics and protein function. Proc Natl Acad Sci USA 102: 6679–6685.
14. Karplus M, McCammon JA (2002) Molecular dynamics simulations of biomolecules. Nat Struct Biol 9: 646–652.
15. Karplus M, Petsko GA (1990) Molecular dynamics simulations in biology. Nature 347: 631–639.
16. McCammon JA, Gelin BR, Karplus M (1977) Dynamics of folded proteins. Nature 267: 585–590.
17. Levitt M (1981) Molecular dynamics of hydrogen bonds in bovine pancreatic trypsin inhibitor protein. Nature 294: 379–380.
18. Piana S, Lindorff-Larsen K, Shaw DE (2012) Protein folding kinetics and thermodynamics from atomistic simulation. Proc Natl Acad Sci U S A 109: 17845–17850.
19. Cabana J, Holleran B, Beaulieu M-È, Leduc R, Escher E, et al. (2013) Critical hydrogen bond formation for activation of the angiotensin II type 1 receptor. Journal Of Biological Chemistry 288: 2593–2604.
20. Shan Y, Eastwood MP, Zhang X, Kim ET, Arkhipov A, et al. (2012) Oncogenic mutations counteract intrinsic disorder in the EGFR kinase and promote receptor dimerization. Cell 149: 860–870.
21. Shan Y, Kim ET, Eastwood MP, Dror RO, Seeliger MA, et al. (2011) How does a drug molecule find its target binding site? J Am Chem Soc 133: 9181–9183.
22. Shaw DE, Dror RO, Salmon JK, Grossman JP, Mackenzie KM, et al. (2009) Millisecond-scale molecular dynamics simulations on Anton. In: Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis; 14–20 November 2009, Portland, Oregon, United States. ACM SC '09. Available: http://dx.doi.org/10.1145/1654059.1654126
23. Wang Y, Harrison CB, Schulten K, McCammon JA (2011) Implementation of Accelerated Molecular Dynamics in NAMD. Comput Sci Discov 4: 015002.
24. Pronk S, Páll S, Schulz R, Larsson P, Bjelkmar P, et al. (2013) GROMACS 4.5: a high-throughput and highly parallel open source molecular simulation toolkit. Bioinformatics 29: 845–854.
25. Atkins P, de Paula J (2010) Physical Chemistry. 9 ed. W. H. Freeman.
26. Tasumi M, Takeuchi H, Ataka S, Dwivedi AM, Krimm S (1982) Normal vibrations of proteins: glucagon. Biopolymers 21: 711–714.
27. Go N, Noguti T, Nishikawa T (1983) Dynamics of a small globular protein in terms of low-frequency vibrational modes. Proc Natl Acad Sci USA 80: 3696–3700.
28. Brooks B, Karplus M (1983) Harmonic dynamics of proteins: normal modes and fluctuations in bovine pancreatic trypsin inhibitor. Proc Natl Acad Sci USA 80: 6571–6575.
29. Levitt M, Sander C, Stern PS (1985) Protein normal-mode dynamics: trypsin inhibitor, crambin, ribonuclease and lysozyme. J Mol Biol 181: 423–447.
30. Brooks B, Karplus M (1985) Normal modes for specific motions of macromolecules: application to the hinge-bending mode of lysozyme. Proc Natl Acad Sci USA 82: 4995–4999.
31. Tirion M (1996) Large Amplitude Elastic Motions in Proteins from a Single-Parameter, Atomic Analysis. Phys Rev Lett 77: 1905–1908.
32. Tama F, Gadea FX, Marques O, Sanejouand YH (2000) Building-block approach for determining low-frequency normal modes of macromolecules. PROTEINS: Structure, Function and Genetics 41: 1–7.
33. Bahar I, Atilgan AR, Erman B (1997) Direct evaluation of thermal fluctuations in proteins using a single-parameter harmonic potential. Fold Des 2: 173–181.
34. Doruker P, Atilgan AR, Bahar I (2000) Dynamics of proteins predicted by molecular dynamics simulations and analytical approaches: application to alpha-amylase inhibitor. PROTEINS: Structure, Function and Genetics 40: 512–524.
35. Atilgan AR, Durell SR, Jernigan RL, Demirel MC, Keskin O, et al. (2001) Anisotropy of fluctuation dynamics of proteins with an elastic network model. Biophys J 80: 505–515.
36. Doruker P, Jernigan RL, Bahar I (2002) Dynamics of large proteins through hierarchical levels of coarse-grained structures. J Comput Chem 23: 119–127.
37. Lin T-L, Song G (2010) Generalized spring tensor models for protein fluctuation dynamics and conformation changes. BMC Struct Biol 10 Suppl 1: S3.
38. Micheletti C, Carloni P, Maritan A (2004) Accurate and efficient description of protein vibrational dynamics: comparing molecular dynamics and Gaussian models. Proteins 55: 635–645.
39. Lopéz-Blanco JR, Garzón JI, Chacón P (2011) iMod: multipurpose normal mode analysis in internal coordinates. Bioinformatics 27: 2843–2850.
40. Kurkcuoglu O, Jernigan RL, Doruker P (2006) Loop motions of triosephosphate isomerase observed with elastic networks. Biochemistry 45: 1173–1182.
41. Wong KF, Selzer T, Benkovic SJ, Hammes-Schiffer S (2005) Impact of distal mutations on the network of coupled motions correlated to hydride transfer in dihydrofolate reductase. Proc Natl Acad Sci USA 102: 6807–6812.
42. Rod TH, Radkiewicz JL, Brooks CL (2003) Correlated motion and the effect of distal mutations in dihydrofolate reductase. Proc Natl Acad Sci USA 100: 6980–6985.
43. Kumar MDS, Bava KA, Gromiha MM, Prabakaran P, Kitajima K, et al. (2006) ProTherm and ProNIT: thermodynamic databases for proteins and protein-nucleic acid interactions. Nucleic Acids Res 34: D204–D206.
44. Henzler-Wildman K, Kern D (2007) Dynamic personalities of proteins. Nature 450: 964–972.
45. Doucet N (2011) Can enzyme engineering benefit from the modulation of protein motions? Lessons learned from NMR relaxation dispersion experiments. Protein Pept Lett 18: 336–343.
46. Gagné D, Charest L-A, Morin S, Kovrigin EL, Doucet N (2012) Conservation of flexible residue clusters among structural and functional enzyme homologues. Journal Of Biological Chemistry 287: 44289–44300.
47. Boehr DD, Schnell JR, McElheny D, Bae S-H, Duggan BM, et al. (2013) A Distal Mutation Perturbs Dynamic Amino Acid Networks in Dihydrofolate Reductase. Biochemistry 52: 4605–4619.
48. McConkey BJ, Sobolev V, Edelman M (2002) Quantification of protein surfaces, volumes and atom-atom contacts using a constrained Voronoi procedure. Bioinformatics 18: 1365–1373.
49. Kundu S, Melton JS, Sorensen DC, Phillips GN (2002) Dynamics of proteins in crystals: comparison of experiment with simple models. Biophys J 83: 723–732.
50. Amemiya T, Koike R, Kidera A, Ota M (2012) PSCDB: a database for protein structural change upon ligand binding. Nucleic Acids Res 40: D554–D558.
51. Hinsen K (1998) Analysis of domain motions by approximate normal mode calculations. PROTEINS: Structure, Function and Genetics 33: 417–429.
52. Marques O, Sanejouand YH (1995) Hinge-bending motion in citrate synthase arising from normal mode calculations. PROTEINS: Structure, Function and Genetics 23: 557–560.
53. Ma J, Karplus M (1997) Ligand-induced conformational changes in ras p21: a normal mode and energy minimization analysis. J Mol Biol 274: 114–131.
54. Karplus M, Kushick JN (1981) Method for estimating the configurational entropy of macromolecules. Macromolecules 14: 325–332.
55. McQuarrie DA (1976) Statistical Mechanics. University Science Books.
56. Gromiha MM, Sarai A (2010) Thermodynamic database for proteins: features and applications. Methods Mol Biol 609: 97–112.
57. Dehouck Y, Grosfils A, Folch B, Gilis D, Bogaerts P, et al. (2009) Fast and accurate predictions of protein stability changes upon mutations using statistical potentials and neural networks: PoPMuSiC-2.0. Bioinformatics 25: 2537–2543.
58. Berman H, Henrick K, Nakamura H, Markley JL (2007) The worldwide Protein Data Bank (wwPDB): ensuring a single, uniform archive of PDB data. Nucleic Acids Res 35: D301–D303.
59. Eswar N, Webb B, Marti-Renom MA, Madhusudhan MS, Eramian D, et al. (2006) Comparative protein structure modeling using Modeller. Current protocols in bioinformatics 15: 5.6.1–5.6.30.
60. Parthiban V, Gromiha MM, Schomburg D (2006) CUPSAT: prediction of protein stability upon point mutations. Nucleic Acids Res 34: W239–W242.
61. Zhou H, Zhou Y (2002) Distance-scaled, finite ideal-gas reference state improves structure-derived potentials of mean force for structure selection and stability prediction. Protein Sci 11: 2714–2726.
62. Yin S, Ding F, Dokholyan NV (2007) Eris: an automated estimator of protein stability. Nat Methods 4: 466–467.
63. Capriotti E, Fariselli P, Casadio R (2005) I-Mutant2.0: predicting stability changes upon mutation from the protein sequence or structure. Nucleic Acids Res 33: W306–W310.
64. Masso M, Vaisman II (2010) AUTO-MUTE: web-based tools for predicting stability changes in proteins due to single amino acid replacements. Protein Engineering Design & Selection 23: 683–687.
65. Schymkowitz J, Borg J, Stricher F, Nys R, Rousseau F, et al. (2005) The FoldX web server: an online force field. Nucleic Acids Res 33: W382–W388.
66. Leaver-Fay A, Tyka M, Lewis SM, Lange OF, Thompson J, et al. (2011) ROSETTA3: an object-oriented software suite for the simulation and design of macromolecules. Methods in Enzymology 487: 545–574.
67. Ferreiro DU, Komives EA, Wolynes PG (2013) Frustration in Biomolecules. arXiv 1312.0867v1.
68. Thiltgen G, Goldstein RA (2012) Assessing predictors of changes in protein stability upon mutation using self-consistency. PLoS ONE 7: e46084.
69. Gekko K, Yamagami K, Kunori Y, Ichihara S, Kodama M, et al. (1993) Effects of point mutation in a flexible loop on the stability and enzymatic function of Escherichia coli dihydrofolate reductase. J Biochem 113: 74–80.
70. Swanwick RS, Shrimpton PJ, Allemann RK (2004) Pivotal role of Gly 121 in dihydrofolate reductase from Escherichia coli: the altered structure of a mutant enzyme may form the basis of its diminished catalytic performance. Biochemistry 43: 4119–4127.
71. Soheilifard R, Makarov DE, Rodin GJ (2008) Critical evaluation of simple network models of protein dynamics and their comparison with crystallographic B-factors. Phys Biol 5: 026008.
72. Zhang J, Liu JS (2006) On side-chain conformational entropy of proteins. PLoS Comput Biol 2: e168.
73. Doig AJ, Sternberg MJ (1995) Side-chain conformational entropy in protein folding. Protein Sci 4: 2247–2251.
74. Cui Q, Bahar I (2006) Normal Mode Analysis. CRC Press.
75. Skjaerven L, Hollup SM, Reuter N (2009) Normal mode analysis for proteins. Journal of Molecular Structure: THEOCHEM 898: 42–48.
76. Sobolev V, Wade R, Vriend G, Edelman M (1996) Molecular docking using surface complementarity. PROTEINS: Structure, Function and Genetics 25: 120–129.
77. Clementi C, Nymeyer H, Onuchic JN (2000) Topological and energetic factors: what determines the structural details of the transition state ensemble and "en-route" intermediates for protein folding? An investigation for small globular proteins. J Mol Biol 298: 937–953.
78. Boos DD (2003) Introduction to the Bootstrap World. Statistical Science 18: 168–174.
79. Hinkley DV (1988) Bootstrap Methods. Journal of the Royal Statistical Society Series B (Methodological) 50: 321–337.
80. Stine R (1989) An Introduction to Bootstrap Methods Examples and Ideas. Sociological Methods & Research 18: 243–291.
81. Efron B, Tibshirani R (1995) Cross-validation and the bootstrap: Estimating the error rate of a prediction rule. Technical Report No. 577, National Science Foundation.
82. Efron B (1979) Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics 7: 1–26.