What’s great about Lead Finder?

We recently announced our collaboration with BioMolTech, a small modeling software company best known for their docking software, Lead Finder. Cresset has traditionally focused on ligand-based design, but as we expand our capabilities into more structure-based methods we realized that we would have to supply a robust and accurate docking method to our customers. So, why did we choose Lead Finder?


A graphical interface to Lead Finder will be included in our new structure-based design application.

The requirements for a great docking engine are simple to state: it needs to be fast and it needs to be accurate. The latter is by far the more important: nobody cares how quickly you got the answer if it is wrong! Our first question when evaluating docking methods was therefore how good each one was. This is actually a difficult question to answer, as there are several different definitions of ‘good’ depending on what you want: virtual screening enrichment? Good pose prediction? Accurate ranking of active molecules?

The first of these, virtual screening, is what most people think of when they think of docking success. Lead Finder has been validated on a wide variety of target classes and shows excellent enrichment rates (median ROC value across 34 protein targets was 0.94), even on targets traditionally seen as very hard such as PPAR-γ. The performance on kinases was uniformly excellent, with ROC values ranging from 0.86 for fibroblast growth factor receptor kinase (FGFR) to 0.96 for tyrosine kinase c-Src.


A series of SYK ligands docked to PDB 4yjq with crystal ligand shown in purple.

Pose prediction is of more interest to those working in the lead optimization phase, where assessing the likely bound conformation of a newly-proposed structure can be very helpful in guiding design. Here, too, Lead Finder performs well. On the widely-used Astex Diverse Set, a standard benchmark for docking performance, Lead Finder produces the correct pose as the top-scoring result 82% of the time, which is comparable to other state-of-the-art methods (Gold, for example, gets 81% on the same measure). On a number of literature data sets testing self-docking performance, Lead Finder finds the correct pose between 81% and 96% of the time, which is excellent.


Lead Finder includes dedicated modes for extra-precision and virtual screening experiments.

One of the most intriguing things about Lead Finder is the makeup of its scoring functions. In contrast to many other scoring functions which use heuristic or knowledge-based potentials, the Lead Finder scoring functions comprise a set of physics-based potentials describing electrostatics, hydrogen bonding, desolvation energy, entropic losses on binding and so on. Different scoring functions can be obtained by weighting these contributions differently: BioMolTech have found that the optimal weights for pose prediction differ slightly from those for energy prediction, for example. A separate scoring function has been developed which aims to compute a ΔG of binding given a correct pose. This is a difficult task, and the success of the Lead Finder function was demonstrated in the 2010 CSAR blind challenge, where the binding energy of 343 protein-ligand complexes had to be predicted ab initio. Lead Finder was the best-performing docking method in that challenge. BioMolTech are actively building on this excellent result with the aim of making robust and reliable activity predictions a standard outcome of a Lead Finder experiment.
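The general scheme is easy to sketch: each pose is scored as a weighted sum of physical terms, and different tasks use different weight sets. The minimal Python sketch below illustrates the idea only; the term names and weight values are invented for illustration, not Lead Finder’s actual parameters.

```python
# Illustrative weighted-sum scoring scheme (invented terms and weights,
# not Lead Finder's published parameters).
TERMS = ("electrostatics", "h_bonding", "desolvation", "entropy")

# Hypothetical weight sets: one tuned for pose prediction, one for dG estimation.
WEIGHTS = {
    "pose": {"electrostatics": 1.0, "h_bonding": 1.3, "desolvation": 0.8, "entropy": 0.5},
    "dG":   {"electrostatics": 1.1, "h_bonding": 1.0, "desolvation": 1.0, "entropy": 0.9},
}

def score(term_values, mode="pose"):
    """Combine per-pose energy terms (kcal/mol) using mode-specific weights."""
    return sum(WEIGHTS[mode][t] * term_values[t] for t in TERMS)

pose_terms = {"electrostatics": -4.2, "h_bonding": -2.9, "desolvation": 3.1, "entropy": 1.7}
print(score(pose_terms, "pose"), score(pose_terms, "dG"))
```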

Cresset are proud to be the worldwide distributors for Lead Finder. It is available today as a command-line application and will be built into Cresset’s upcoming structure-based drug design workbench.

Request an evaluation of Lead Finder.

Understanding torsions in Spark 10.4

Spark is the most advanced and complex bioisostere finding tool available today. While it excels at finding bioisosteric replacements that match the steric, pharmacophoric and electronic nature of the original molecule, we realize that the output of a Spark search is only the beginning. Spark suggests new molecules, so someone needs to make them. Given the time and effort that goes into even a simple synthesis, we need to make sure that Spark’s suggestions are as robust as they can be to maximize the chances of success for each Spark result.

Spark’s internal algorithms do a lot of complicated work to ensure that the result molecules are chemically sensible. When you are stitching a fragment into a new molecule there are many subtle effects that you have to take account of:

  • Does the hybridization state of any of the atoms change (especially important for nitrogens, which may shift from pyramidal to planar)?
  • Does the pKa of any ionizable groups change, and if so do we need to re-assign protonation states?
  • How do the electrostatic properties of the fragment change once it is stitched into the rest of the molecule (and vice versa)?
  • If any newly-created bonds are free to rotate, what is the rotamer that maximizes the similarity of the new product molecule to the starting point?
  • Is this conformation energetically sensible?

On this last point, Spark carries out a rotation around the newly-formed bond in order to estimate the amount of strain energy carried by that bond in the assigned torsion angle. For speed, this rotational scan is performed holding the parts of the molecule on each side of the bond rigid, and as a result the computed strain energy is only an estimate. However, even this estimated strain energy can be very useful to flag up cases where the new molecule is in an energetically unfavorable conformation and hence would need further investigation before it could be recommended for synthesis.
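For readers who want to experiment with the idea, here is a minimal sketch of such a rigid torsion scan using the open-source RDKit and its MMFF94 implementation (our illustration of the general approach, not Spark’s actual code; the molecule and atom indices are arbitrary examples):

```python
# Rigid torsion scan: rotate one bond with both sides held fixed, and estimate
# strain as the single-point energy at the assigned angle relative to the best
# angle found in the scan. Illustrative only; Spark's implementation differs.
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolTransforms

mol = Chem.AddHs(Chem.MolFromSmiles("c1ccccc1C(=O)Nc1ccccc1"))  # benzanilide
AllChem.EmbedMolecule(mol, randomSeed=42)
conf = mol.GetConformer()

i, j, k, l = 5, 6, 8, 9  # dihedral spanning the amide C-N bond in this molecule
assigned = rdMolTransforms.GetDihedralDeg(conf, i, j, k, l)

def single_point_energy(m):
    props = AllChem.MMFFGetMoleculeProperties(m)
    return AllChem.MMFFGetMoleculeForceField(m, props).CalcEnergy()

energies = []
for angle in range(-180, 180, 10):  # 10-degree rigid scan
    rdMolTransforms.SetDihedralDeg(conf, i, j, k, l, float(angle))
    energies.append(single_point_energy(mol))

rdMolTransforms.SetDihedralDeg(conf, i, j, k, l, assigned)  # restore geometry
strain = single_point_energy(mol) - min(energies)
print(f"Estimated torsional strain: {strain:.1f} kcal/mol")
```

Because nothing is minimized at each scan step, the scan is fast, but the resulting strain value is, as noted above, only an estimate.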

While this purely computational procedure works well, it would be nice to bring in some more empirical knowledge gleaned from the vast numbers of small molecule crystal structures that are available in the Cambridge Structural Database (CSD). Luckily, this analysis has already been done by Prof. Rarey’s group at the University of Hamburg, resulting in a pair of papers detailing a hierarchical set of rules to determine preferred values for torsion angles in small drug-like molecules.1, 2 This rule set, called the Torsion Library, has been incorporated into the most recent release of Spark (version 10.4).

Whenever a new product molecule is formed, Spark applies the Torsion Library rules to the newly-formed bonds, and highlights cases where the torsion angle is not one that is frequently observed in the CSD. This doesn’t automatically exclude that result from consideration (there may be a reason, such as a steric clash, for the uncommon torsion), but it does flag that result molecule as needing careful inspection. Torsion Library scores are automatically displayed for all results and can be used in Spark’s filters like any other result column.
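The flavour of such a rule set is easy to convey with a toy example (the real Torsion Library is a hierarchical tree of SMARTS rules with angle histograms fitted to CSD data; the two rules and angle ranges below are simplified stand-ins):

```python
# Toy Torsion-Library-style check: match a torsion pattern, measure the
# dihedral, and flag angles outside the 'frequently observed' ranges.
# The rules below are illustrative, not the published Torsion Library values.
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolTransforms

TOY_RULES = [
    ("[c:1][c:2]!@[c:3][c:4]", (30.0, 90.0)),  # biaryl twist (illustrative range)
    ("[C:1][C:2]!@[O:3][C:4]", (0.0, 30.0)),   # aliphatic ether, near-anti (folded)
]

def folded(angle):
    """Fold an angle into [0, 90] degrees; the toy rules assume 2-fold symmetry."""
    a = abs(angle) % 180.0
    return 180.0 - a if a > 90.0 else a

def torsion_frequency_class(mol):
    """Return 'High' if every matched torsion lies in its frequent range, else 'Low'."""
    conf = mol.GetConformer()
    for smarts, (lo, hi) in TOY_RULES:
        patt = Chem.MolFromSmarts(smarts)
        for match in mol.GetSubstructMatches(patt):
            angle = folded(rdMolTransforms.GetDihedralDeg(conf, *match))
            if not lo <= angle <= hi:
                return "Low"
    return "High"

mol = Chem.AddHs(Chem.MolFromSmiles("c1ccc(-c2ccccc2)cc1"))  # biphenyl
AllChem.EmbedMolecule(mol, randomSeed=7)
print(torsion_frequency_class(mol))
```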

As a quick example of the usefulness of the Torsion Library, we have re-examined the 2011 case study FieldStere V3.0 Example 2: Fragment Growing. In this study we were performing a fragment growing experiment in p38, using an existing ligand to guide the growth of a fragment towards the hinge binding region (Figure 1). If we had particular chemistries in mind, then instead of searching the general fragment databases we could search specific reagent databases to find which commercially-available reagents we could use.

Figure 1: Fragment growing Spark experiment.

In this case, we assume that we could grow the fragment either through addition of a thiol (leading to a sulfur linker) or via addition of an amine (leading to a nitrogen linker). Both searches yield a variety of results which look potentially active. However, analysis of the Torsion Library results reveals significant differences. Looking at the amine linker results (Figure 2), 493 of the top 500 results have only a ‘Medium’ Torsion Library score, indicating that to get the amino substituents to reach the hinge binding region you have to twist the amine into a moderately unfavorable conformation.

Figure 2: Amine linkers.

However, when we look at the thioether result set (Figure 3), the majority (327/500) of the results are in the ‘High’ frequency category.

Figure 3: Thioether linkers.

Of course, it’s possible that highly active compounds could be obtained from either linking chemistry. However, the Torsion Library results clearly indicate that a thioether linkage is preferred here, purely because it orients the added fragments towards the hinge better. This is valuable knowledge if we were planning a small combinatorial library around this fragment expansion.

Adding the Torsion Library to Spark makes the results even more robust and useful, allowing you to see at a glance what the known experimental conformational preferences of small molecules say about the conformer quality in your Spark results. The new feature is available now as part of the Spark 10.4 release – try a free evaluation today.

1. Schärfer et al., J. Med. Chem. 2013, 56(5), 2016.

2. Guba et al., J. Chem. Inf. Model. 2016, 56(1), 1.

What rings do medicinal chemists use, and why?

The vast majority of small molecule drugs contain at least one ring. The rigidity, synthetic accessibility and geometric preferences of rings mean that medicinal chemistry series are usually defined in terms of which ring or rings they have at their core. However, ring systems are more than just scaffolds waiting to be elaborated: the electrostatic and pharmacophoric properties of ring systems are usually crucial to the biological activity of the molecules that contain them.

We have conducted an investigation into the most common ring systems and substitution patterns in the recent medicinal chemistry literature, as derived from the ChEMBL database. For each of these rings the electrostatic potential has been calculated, allowing the chemist to see at a glance the electronic properties of each system. In addition, applying the Spark bioisostere evaluation metric to the rings database reveals the best bioisosteric replacements for each ring system.

In the poster, What rings do medicinal chemists use, and why?, selected entries from the rings database are shown and discussed. The full data set is an invaluable aid to the medicinal chemist looking to understand the properties of their lead molecule and the opportunities for variation of its core.

April 2016 release of new Spark databases

The new release of Spark comes with new and updated fragment and reagent databases. These are designed to give you the widest sources of inspiration for your projects, whilst also enabling a close link between Spark’s suggestions and the chemistry that is available to you.

Fragment databases

The latest Spark databases include over 3.5 million fragments that are used to find novel bioisosteres for your project. These come from two distinct sources – ZINC and ChEMBL. In each case the molecules from the entire source collection are fragmented and the frequency with which each fragment appears is noted. We then sort the fragments according to frequency and label them according to the number of bonds that were broken to disconnect the fragment from its original molecule.
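The census itself is conceptually simple. A sketch of the idea using RDKit’s BRICS fragmentation (our illustration; Spark’s actual fragmentation rules differ) might look like this:

```python
# Fragment a collection, count fragment frequencies, and record how many
# connection points each fragment carries. Illustrative sketch only.
from collections import Counter
from rdkit import Chem
from rdkit.Chem import BRICS

smiles_collection = ["CCOc1ccccc1C(=O)N", "CCOc1ccccc1", "NCc1ccccc1"]  # stand-in for ZINC/ChEMBL

freq = Counter()
for smi in smiles_collection:
    for frag in BRICS.BRICSDecompose(Chem.MolFromSmiles(smi)):
        freq[frag] += 1

for frag, count in freq.most_common():
    m = Chem.MolFromSmiles(frag)
    n_attach = sum(1 for a in m.GetAtoms() if a.GetAtomicNum() == 0)  # dummy atoms
    print(f"{frag:25s} count={count} connection_points={n_attach}")
```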

Figure 1: Count of fragments in Spark databases from ZINC and ChEMBL split by the number of connection points of each fragment.

Analysis of the numbers of fragments in common between the ZINC and ChEMBL databases shows surprising complementarity.

Number of fragments in common (to nearest 1,000); rows: ChEMBL21 frequency class, columns: ZINC frequency class

                     Very Common   Common   Less Common     Rare   Very Rare   Singleton
ChEMBL21 common           41,000   41,000        43,000   26,000      18,000      15,000
ChEMBL21 rare              7,000   24,000        44,000   46,000      45,000      34,000
ChEMBL21 very rare         3,000   11,000        26,000   34,000      42,000      70,000

Unsurprisingly, there is significant overlap in the most common fragments from each database. However, once you get to the rarer fragments it is apparent that ZINC and ChEMBL occupy quite distinct parts of chemical space, with the majority of “rare” fragments being unique to each database.

Reagent databases

In this release we have completely replaced the source of our reagent fragments. We are delighted to be working with eMolecules to provide you with over 500,000 reagents that are easy to order, with known availability. The new eMolecules-based reagent databases use an enhanced set of rules to more closely relate the Spark results to the chemistry that you want to use on your molecules.


Figure 2: Analysis of Spark reagent databases split by molecular weight.

Each fragment in the eMolecules database is linked back to both the eMolecules ID for the source reagent and its availability. Running a Spark search on these databases thus allows you to very simply move from the Spark experiment to ordering the reagents you require to turn your Spark results into reality.

Figure 3: Spark reagent results include availability information from eMolecules.

Conclusion

This release of databases for Spark increases the number of fragments and improves the availability of reagents. When combined with the existing VEHICLe-derived database, the CSD-derived database and databases generated from your corporate collections with Spark’s database generator, we believe that Spark will find an even better range of bioisosteres for your project.

To update to the latest databases or to take a look at how Spark can impact your project please contact us.

Pros and cons of alignment-independent descriptors

Working with molecules in 3D is computationally expensive compared to most 2D methods. Most modern cheminformatics toolkits can do hundreds of thousands to millions of 2D fingerprint comparisons per second, with 3D similarity techniques being multiple orders of magnitude slower.

The computationally-expensive part of the 3D calculations usually involves aligning conformations to each other. The natural tendency, therefore, is to see if we can skip this step and compute a set of properties that can tell us if two molecules are similar in 3D without actually having to align them. If this works we can get the best of both worlds: the speed of 2D comparisons combined with the accuracy and structural independence of 3D similarity functions.

Pharmacophoric descriptors

The earliest version of this idea is the simple pharmacophore. All you have to do is assign a few pharmacophoric points to each molecule (usually based on some sort of functional group pattern recognition), then generate a set of descriptors based on sets of these (usually 2 or 3). If two molecules share one or more pharmacophore descriptor, then they match.
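RDKit offers a convenient way to build exactly this kind of descriptor, which makes the idea concrete (a generic triplet fingerprint, not any particular vendor’s implementation). Passing a 3D distance matrix to the 2D pharmacophore fingerprint generator yields a conformer-dependent pharmacophore fingerprint:

```python
# 3D pharmacophore fingerprints via RDKit: feature triplets binned by
# 3D (rather than topological) distances.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from rdkit.Chem.Pharm2D import Generate, Gobbi_Pharm2D

def pharm3d_fp(smiles, seed=1):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=seed)
    dmat = Chem.Get3DDistanceMatrix(mol)  # 3D distances instead of bond counts
    return Generate.Gen2DFingerprint(mol, Gobbi_Pharm2D.factory, dMat=dmat)

fp1 = pharm3d_fp("CC(=O)Nc1ccc(O)cc1")     # paracetamol
fp2 = pharm3d_fp("CC(=O)Oc1ccccc1C(=O)O")  # aspirin
print(DataStructs.TanimotoSimilarity(fp1, fp2))
```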

Pharmacophore searches succeed on some counts: they are indeed very fast, and they do encode some 3D information. However, they involve a very crude binning of the wide array of possible intermolecular interactions into a few pharmacophore types, and they describe shape poorly, giving them very limited predictive power.

If pharmacophores can’t describe shape well, are there other techniques that can? A number of different methods have been presented, such as those based on multipole moments or spherical harmonic coefficients (e.g. ParaSurf/ParaFit), as well as methods based on statistical moments such as Ultrafast Shape Recognition (USR). None of these has achieved widespread use: harmonic coefficients are not rotation-invariant, while the USR technique correlates poorly with more accurate measures of shape similarity.1
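USR at least has the virtue of being easy to implement, which also shows how little information it retains: four reference points and three moments each, so an entire conformer is reduced to twelve numbers. A numpy implementation of the original recipe:

```python
# Ultrafast Shape Recognition (Ballester & Richards): distances from all atoms
# to four reference points, summarized by mean, standard deviation and
# (cube-rooted) skew, giving a 12-number shape descriptor.
import numpy as np

def usr_descriptor(coords):
    """coords: (N, 3) array of atom positions -> 12-element USR descriptor."""
    ctd = coords.mean(axis=0)                                    # centroid
    d_ctd = np.linalg.norm(coords - ctd, axis=1)
    cst = coords[d_ctd.argmin()]                                 # closest atom to centroid
    fct = coords[d_ctd.argmax()]                                 # farthest atom from centroid
    ftf = coords[np.linalg.norm(coords - fct, axis=1).argmax()]  # farthest atom from fct
    desc = []
    for ref in (ctd, cst, fct, ftf):
        d = np.linalg.norm(coords - ref, axis=1)
        mu = d.mean()
        desc += [mu, d.std(), np.cbrt(((d - mu) ** 3).mean())]
    return np.array(desc)

def usr_similarity(a, b):
    """Scaled inverse Manhattan distance, as in the original paper."""
    return 1.0 / (1.0 + np.abs(a - b).mean())
```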

Field descriptor distances

It would be nice if there were a way of providing alignment-independent descriptors which described both electrostatics and shape/pharmacophoric properties with a reasonable degree of accuracy. This is actually one of the first things we did when we were looking into starting Cresset – we developed a method called FieldPrint that encodes the distance matrix of field descriptors into a fingerprint that can be used for alignment-independent similarity calculations. The concept is similar to that of GRIND,2 which was published around the same time, although the algorithmic details are somewhat different.
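The shared core idea is a distance correlogram: take the pairwise distances between field extrema, bin them, and store one number per bin so that the result no longer depends on orientation. A minimal sketch of that general encoding (GRIND-style; FieldPrint’s actual algorithm differs in its details):

```python
# GRIND-style distance correlogram: for each distance bin, keep the largest
# product of field intensities over all point pairs at that separation.
# Illustrative of the general encoding only.
import numpy as np

def correlogram(points, intensities, n_bins=20, max_dist=20.0):
    """points: (N, 3) field-point positions; intensities: (N,) field values."""
    vec = np.zeros(n_bins)
    width = max_dist / n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = np.linalg.norm(points[i] - points[j])
            b = min(int(d / width), n_bins - 1)
            vec[b] = max(vec[b], intensities[i] * intensities[j])
    return vec  # compare correlograms directly, no alignment needed
```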

We put a lot of work into these techniques, but were never able to get a method that we were completely satisfied with. The problem we found is that encoding the distances in pairs/triplets of field descriptors ends up losing too much 3D information, and as a result you either end up with a slower mimic of standard 2D fingerprints, or you end up with a large false positive count. The FieldPrints have a tendency to find molecules with a similar overall pattern of positive/negative field, but can compute a very high similarity for molecules that are in reality quite dissimilar in terms of the 3D spatial arrangement of those fields. My belief now is that this is an inherent flaw of alignment-independent descriptors: they either have to be sufficiently complex that you are in effect computing an alignment, or you lose too much information and are not significantly better off than just using old-fashioned structural/pharmacophoric fingerprints.

Figure 1: As you move from full 3D interaction potentials to 2D correlograms to 1D fingerprints, comparisons get faster but you lose information

Handling conformation

A further consideration is how you handle conformation. The original GRIND papers just use a single conformer per molecule, and their validation was confined to series of rigid molecules or sets of molecules where single conformations were generated and manually adjusted to be similar. In the general case neither of these shortcuts will work. Any method that purports to be 3D but starts with a single conformation per molecule is inherently flawed: the whole point of 3D is that molecules are flexible.

There is a disturbing number of papers out there that do some sort of notionally-3D analysis on sets of single CORINA-derived conformations. You can get very good enrichment factors on retrospective virtual screens doing this, but in practice the enrichments are largely bogus. CORINA is deterministic, and as a result molecules with similar structures will tend to be put into similar conformations. Combine this with the fact that many standard retrospective VS data sets have very low structural diversity, and the problem becomes apparent. The query molecule and its dozens of congeners in the “actives” data set are all placed in the same single conformation, and so application of a 3D or pseudo-3D technique can easily produce excellent-looking enrichment statistics. However, the enrichment all comes from a hidden 2D similarity.

So, single-conformation methods are a dead end and we need to consider flexibility. Once you are doing so, you need to factor in both the conformer generation time as part of the build time for the descriptor, and also factor in that your comparison speeds will now be two to four orders of magnitude slower than 2D fingerprints (assuming 100 conformers per molecule, and depending on whether you know a single bioactive conformation for one of the two molecules being compared or whether you need to compare conformer populations). 2D methods thus have an unassailable speed advantage, which is part of the reason they remain so popular.

Using FieldPrint as a filter

Our original vision for Blaze (or FieldScreen, as it was then) was that it would rely on the FieldPrints to give extremely rapid searching. You can get quite good enrichment factors from the FieldPrints in retrospective virtual screens, but when we investigated further this turned out to be largely because they act as a proxy for overall molecular size and charge. Once you control for that by more careful selection of decoys, the FieldPrint performance is much poorer. Assessing a molecular similarity technique through retrospective virtual screening performance is very, very hard to do well, and as a result I am intrinsically wary of methods that present a set of DUD enrichments as their sole validation: FieldPrints perform quite well on DUD, but we know that they are not particularly effective in real prospective applications.

We still use the FieldPrint technology: it’s the first search stage in every Blaze run. It’s generally good enough to filter out 25-50% of decoy molecules that have no similarity to the query, but certainly not good enough to use the FieldPrint ranks directly. This is why we just use them as a pre-filter: molecules that pass that filter have much more accurate similarities computed using our alignment-based clique/simplex algorithms.
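Structurally, then, a Blaze-style search is a classic two-stage funnel. In outline (placeholder function names, not Blaze’s API):

```python
# Two-stage screen: a cheap alignment-free similarity discards clear
# non-matches, then an expensive alignment-based similarity ranks survivors.
def screen(query, database, cheap_sim, accurate_sim, prefilter_cutoff=0.3):
    survivors = [m for m in database if cheap_sim(query, m) >= prefilter_cutoff]
    scored = [(accurate_sim(query, m), m) for m in survivors]  # slow, aligned
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```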

In the end, there’s no real short cut. All attempts to date to make 3D comparisons faster by simplifying descriptors and skipping the expensive alignment step just seem to leave out too much information – such techniques can be useful for cutting down the search space but if you’re going to spend CPU cycles working in 3D you might as well do it properly!

1. Zhou, T. et al. J. Mol. Graph. Model. 2010, 29, 443–449.
2. Pastor, M.; Cruciani, G.; McLay, I.; Pickett, S.; Clementi, S. J. Med. Chem. 2000, 43(17), 3233–3243.

Download an evaluation

Try our software for yourself – download a free evaluation.

Affordable virtual screening with Blaze: Benchmarks

Introduction

We released BlazeGPU a couple of years ago, allowing the full power of the Blaze virtual screening system to be used on a few consumer graphics cards rather than a full-scale Linux cluster. Since then, graphics cards and CPUs have only got faster, so we decided that it was time to update our benchmarks and see how well all of the new hardware performs.

For these benchmarks we took a random subset of 4,000 molecules from our in-house Blaze data set and searched with a medium-sized query molecule. The molecules in the data set average 80 conformers each. We’ve run with three different search conditions: the full slow-but-accurate simplex algorithm, the standard clique algorithm and the new fastclique algorithm. All of these were run with 50% fields and 50% shape.

CPU performance

Firstly, the CPU benchmarks. All of these are single-core performance, but with all cores loaded so that we’re not benefitting from Intel Turbo Boost. In most cases Blaze will be saturating all cores, so this is representative of real-world performance. Note that the vertical axis is on a log scale.

CPU benchmarks

As can be seen, there’s a significant performance difference between the older CPUs at Cresset (such as the Q6600) and the newer Ivy Bridge i7-3770K chips, but not nearly as much as you would expect given that the Q6600s are around 7-8 years old at this point. The significant speed improvements of the fastclique algorithm are clearly visible with the throughput being more than 4x greater than the original clique algorithm. The last set of columns on the graph are from an Amazon c4.xlarge instance and show that the performance of each core on those systems is roughly the same as the Sandy Bridge i3-2120.

GPU performance

Moving on to the GPUs, we’ve tested the throughput on a variety of different systems. Firstly, we’ve tested a variety of GTX580s on different motherboards and processors. As you would expect, for the most part the performance is governed by the GPU, but the exception is the fifth test system which is noticeably slower than the others. That card is sitting in a much older chassis with an older motherboard and hence is probably suffering from lack of backplane bandwidth to the GPU.

GPU benchmarks

The newer GTX960s perform extremely well on the Blaze calculations. We weren’t sure if they would, after the disappointment of the GTX680 which was noticeably slower than the 580 (data not shown). The difference is noticeable in the clique stages, but really stands out in the simplex calculations where a GTX960 is 50% faster than the GTX580s. By contrast, the high-end Tesla hardware is not a great performer on the Blaze OpenCL kernels. By all accounts the Tesla hardware is significantly faster than the consumer hardware on double precision workloads, but the Blaze code is all single precision and in that realm the cheap consumer hardware has an unbeatable price/performance advantage.
Finally, the GRID K520 is the hardware found on the Amazon g2.2xlarge and g2.8xlarge instances. As can be seen, it’s not a brilliant performer on the Blaze workload, being around the same speed as the Tesla on the fastclique algorithm but noticeably slower than all of the other cards tested on the simplex workload. However, it provides a nice test of GPU scaling: when running on a 4 times larger data set on all 4 GPUs of a g2.8xlarge instance, we observed substantially the same throughput as running the original data set on a single K520 GPU, showing that we can parallelise across multi-GPU systems with no loss of performance.

Cost efficiency on Amazon

Converting the throughput shown above, we can look at the cost of screening on the Amazon cluster with Blaze. The raw cost to screen a million molecules is shown in the table. Note that the actual costs will be somewhat higher, due to job overheads and data transfer costs.

Table: raw cost to screen a million molecules on Amazon

The Amazon GPU solutions are noticeably cheaper for fastclique jobs, roughly cost-competitive for the clique runs, but the poor performance of the K520 on the simplex task means that it is significantly more expensive there. As a result, at the moment there’s no real impetus to use the Amazon GPU resources unless you can get them significantly more discounted than the CPU instances on the spot market.

Conclusion

New hardware is significantly faster at running Blaze than old stock, as would be expected. However, the speed increases are much lower than they have been in the past, with CPUs that are well past their best still performing adequately. On the GPU side, Blaze performs particularly well on commodity graphics cards, leaving few reasons for us to invest in dedicated GPU co-processing cards.

The cost of running a million molecule virtual screen on the Amazon cloud has never been cheaper. If tiered processing is used as is the default for Blaze then these screens can be performed for a very low cost indeed – less than $15 per million molecules for the processing costs.

Contact us for a free evaluation to try Blaze on your own cluster, or Blaze Cloud.

Virtual screening – how many conformations is enough?

One of the things that distinguishes 3D virtual screening methods (such as Cresset’s Blaze) from 2D methods such as fingerprints is that you have to start worrying about conformations. A molecule only has one ECFP4 fingerprint, but if it is flexible then it has lots of shapes and pharmacophores. How to deal with this is one of the fundamental questions in ligand-based virtual screening (LBVS).

A few methods such as some docking tools explicitly search the conformation space as part of the scoring algorithm, but most ligand-based methods rely on pre-enumeration of a set of conformations of the molecules to search. A few notionally 3D methods avoid this by using only a single conformation of each compound, usually that chosen by some 3D coordinate generation program such as Corina. I would argue that these methods are actually 2D methods in disguise, as the ‘3D’ properties they calculate are determined by the combination of the topology and the rules in the Corina database.

Assuming that we are properly considering conformations, the big question that requires answering is: how many? The naïve answer is ‘all of them’, but that turns out not to be very useful, for three reasons. Firstly, if you define ‘conformer space’ as the set of structures sitting in potential energy local minima, then the space depends quite strongly on the force field and solvation model. Secondly, although this definition works for small and simple molecules, once you move to larger and more flexible molecules it becomes less useful: rotating a central bond in the molecule by a few degrees may have virtually no impact on the conformational energy but may move the ends by enough to give a significantly different pharmacophore. The conformation space you need to consider thus has to be wider than just the potential energy minima. Thirdly, there are just too many: the number of conformations goes up exponentially with the number of rotatable bonds.

As a result, most LBVS systems perform a limited sampling of conformation space, generally by capping the number of conformations that are considered for a molecule. As the computational cost increases with the cap, we now need to decide what that should be. A recent Schrödinger paper by Cappel et al. found that this limit is actually very low: although a full exploration of conformation space is necessary to get good results in 3D-QSAR, you only need a fast and limited conformational search to get good LBVS performance.

Interesting – but is it true?

In the past we have done several analyses of Blaze performance vs. the number of conformations, as a result of which we raised the recommended maximum number of conformations from 50 to 100 quite a few years back. The Schrödinger paper seems to contradict this recommendation, and so far we’ve never investigated Blaze performance with really small numbers of conformations, so it was time to have another look.

To do this, we grabbed a few of the DUD data sets that we examined in the original FieldScreen publication, and re-ran them with differing numbers of conformations for the actives and decoys: 5, 10, 20, 50, 100 and 200. The results are shown in Figure 1, firstly as the ROC AUC value, and then as the cluster-corrected ROC AUC value, where the results are assessed in terms of the number of chemotypes retrieved rather than the number of actives.
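One common way to implement a cluster correction of this kind is to weight each active by the reciprocal of its chemotype cluster size, so that every chemotype contributes equally to the curve. A sketch of that general idea (the exact protocol used here may differ):

```python
# Cluster-corrected ROC AUC: actives weighted 1/cluster_size, decoys weight 1.
import numpy as np
from collections import Counter

def cluster_corrected_auc(scores, is_active, chemotype):
    """chemotype: one label per molecule (value is ignored for decoys)."""
    scores = np.asarray(scores)
    is_active = np.asarray(is_active, dtype=bool)
    sizes = Counter(c for c, a in zip(chemotype, is_active) if a)
    w = np.array([1.0 / sizes[c] if a else 0.0 for c, a in zip(chemotype, is_active)])
    order = np.argsort(-scores)  # best-scoring first
    tpr = np.concatenate([[0.0], np.cumsum(w[order]) / w.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(~is_active[order]) / (~is_active).sum()])
    return np.trapz(tpr, fpr)

print(cluster_corrected_auc([0.9, 0.8, 0.7, 0.6, 0.5],
                            [True, True, False, True, False],
                            ["A", "A", "-", "B", "-"]))
```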


Figure 1 – Blaze performance on a DUD subset (top: ROC AUC; bottom: cluster-corrected ROC AUC)

Well, this is pretty conclusive. The VS performance doesn’t show any consistent trends with numbers of conformations: a few searches get better, but a few get worse. Overall the average performance is much the same (or possibly slightly worse) with 200 confs as it is with 5 confs. Looks like Cappel et al. were right, after all.

Or were they? There are several big issues with using retrospective data sets such as DUD to do this sort of analysis. The first is that the actives and inactives come from different chemical spaces. The actives are chosen from J. Med. Chem. papers and tend to be highly-optimised compounds with a regrettable lack of structural diversity, while the inactives are chosen from some chemical space (ZINC, in the case of DUD) with some effort put in to match the properties of the inactives to the actives (number of heavy atoms, etc.). However, it is very difficult to ensure that the decoys actually match the actives (and harder still to define properly what ‘matching’ should actually mean).

The PDE5 data set illustrates the problem. From the graphs, it can be seen that the ROC AUC declines markedly as we increase the number of conformations. The reason becomes apparent when we examine the distribution of rotatable bonds in the data set (Figure 2). The orange bars are the decoys, while the blue bars are the actives. As can be seen, the actives are significantly more rigid than the decoys. As a result, when you increase the number of conformations, the scores for the decoys are more likely to increase (as you find conformers that better match the query) than the scores for the more rigid actives.
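This sort of bias is easy to check for in any actives/decoys set; with RDKit it is only a few lines (a generic check, not necessarily how the figure below was produced):

```python
# Rotatable-bond histograms for actives vs decoys.
from collections import Counter
from rdkit import Chem
from rdkit.Chem import Descriptors

def rotbond_histogram(smiles_list):
    return Counter(Descriptors.NumRotatableBonds(Chem.MolFromSmiles(s))
                   for s in smiles_list)

# e.g. compare rotbond_histogram(actives) with rotbond_histogram(decoys)
```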


Figure 2 – Rotatable bond distribution for PDE5

The other data set with decreasing performance with increasing conformer count is HIV RT. Here the distribution of rotatable bonds is much more similar when comparing actives and decoys. However, let’s examine more closely the structures of the first 12 hits in the 5-conformation data set (Figure 3). The first hit is the query molecule itself, and 8 of the top 12 hits are very structurally similar to it. Although some of these have a high rotatable bond count (e.g. the third hit), matching just the core pyrimidinedione and the pendant benzyl is enough to give a high similarity score; the remaining flexible portions of the molecule are not very large and hence only have a modest contribution to the score. As a result, a high score is obtained for the third hit even though the flexible glycol chain is not in the optimal conformation.


Figure 3 – HIVRT top-ranked hits

The remaining 4 of the top 12 hits all have very little flexibility (1-2 rotatable bonds), and hence are just as easy to find in the 5-conformer data set as the 200-conformer one. The apparent successful performance of the HIV RT search with only 5 conformations is thus misleading: we find the molecules from the same series as the query and the rigid actives earlier than we do in the 200-conformer data set, and these results mask the better performance for the more ‘interesting’ actives in the latter.

The other data sets all tell a similar story. For example, consider the EGFR data set, on which superb performance is obtained even with only 5 conformations. Looking at the actives, they all seem to consist of a phenylamino-substituted bi- or tricyclic heterocycle (Figure 4). In fact, of the 396 actives, 329 match the SMARTS pattern ‘c-[NH]-c’. Even a minimal conformational sampling is likely to produce a conformation where the two aromatic rings are in roughly the correct orientation, and as a result superb screening results can be obtained even with the 5-conf data set. However, this is really an artefact of the lack of chemotype diversity rather than a true reflection of the performance.
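That census is a few lines with RDKit, for anyone who wants to repeat it on their own data sets (the SMARTS is the one quoted above):

```python
# Count molecules matching the anilino-heterocycle motif c-[NH]-c.
from rdkit import Chem

patt = Chem.MolFromSmarts("c-[NH]-c")

def count_matches(smiles_list):
    return sum(1 for s in smiles_list
               if Chem.MolFromSmiles(s).HasSubstructMatch(patt))
```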


Figure 4 – EGFR top-ranked hits

So, what does this experiment (and the analogous ones carried out by Cappel et al.) tell us? Unfortunately, they tell us more about how difficult it is to set up a retrospective virtual screening test set than anything about the optimal parameters to use. The lack of diversity in actives gleaned from the literature, together with the difficulty in property-matching decoys to the actives, mean that the results from this sort of analysis should be taken with a very large pinch of salt indeed. It should be pointed out that Cappel et al. used all 39 of the DUD data sets in their analysis: in our FieldScreen paper we showed that only 15 of these have enough chemotype diversity to be useful for testing LBVS systems, and as we have seen here even those 15 aren’t diverse enough to make robust conclusions.

In their paper Cappel et al. state that they obtained almost identical enrichment values whether they used the correct bioactive conformation or the lowest energy conformation of each query molecule (as determined by their force field). If the enrichment performance does not depend on the query conformation, then by definition either your method isn’t a 3D method and is doing a 2D search in disguise, or the validation data you are using is biased. The low diversity of actives in the majority of the DUD data sets argues for the latter.

I believe that the correct way to assess the relative and absolute performance of VS methods is to test them on actual screening data, not on artificial test sets. Big pharma companies have decades of HTS data waiting to be analysed: the hits are from the same collection as the decoys, they are weak binders rather than optimised compounds, and there are lots of them. Unfortunately, this data is not available to independent researchers. Researchers in big pharma, who do have access to this data, tend for some reason to publish VS performance analyses on literature data sets instead. If any big pharma researchers reading this post fancy working with us on examining the issue of conformation space sampling for virtual screening on their HTS data, then please do let us know.

Bioisosterism – harder than you might think

A common concept in medicinal chemistry is the bioisostere. The exact definition of a bioisostere is rather fuzzy, but the Wikipedia entry is as good as any: ‘bioisosteres are chemical substituents or groups with similar physical or chemical properties which produce broadly similar biological properties to another chemical compound.’ Bioisosteric replacements come in two general classes: core replacements, where the centre of a molecule is changed, creating a new chemical series; and leaf replacements where a replacement is sought for part of the molecule on the periphery, keeping it in the same series. Core replacements can get you out of a difficult ADMET or IP situation or can be a useful stepping stone to developing a backup series. Leaf replacements are more common as part of the day-to-day work of lead optimization – having discovered a substituent that works, the obvious thing to do is to try more things like it!

There are many software products available to search for bioisosteric replacements, both commercial and freely-available. The proliferation of methods is largely because searching for bioisosteres looks so easy. Look at the piece you want to replace, search a database for something largely the same size and geometry, and present the results. Simple, right? However, if you delve into it, the whole process is much more complex than it appears …

The first issue for a bioisostere replacement method involves deciding whether a fragment is physically the right size – are the connection points in the right place and at the right angle? Again this seems a trivially easy question to address until we consider that this is a 3D question. You need to consider the 3D geometry of the fragment – what conformation is it in? We chose to use a mixed approach to this: we provide both fragments derived from the CSD where the conformation comes directly from the small molecule crystal structure and large databases of fragments where we provide a small distribution of low energy conformations. As a result, you can just search experimental small molecule crystal conformations if you wish, but you can augment that search with a much larger and richer conformation database if you so desire.
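One useful property of attachment-point geometry is that it can be compared without any alignment at all, by reducing each pair of exit vectors to internal coordinates. A sketch of that idea (our illustration of the general approach, not Spark’s actual filter):

```python
# Alignment-free exit-vector signature for a two-attachment-point fragment:
# the separation of the attachment atoms plus the angle each exit vector
# makes with the line joining them. Fragments and 'holes' with similar
# signatures are geometrically compatible candidates.
import numpy as np

def exit_vector_signature(positions, directions):
    """positions: (2, 3) attachment-atom coordinates;
    directions: (2, 3) unit vectors along the broken bonds."""
    axis = positions[1] - positions[0]
    sep = np.linalg.norm(axis)
    axis = axis / sep
    ang0 = np.degrees(np.arccos(np.clip(np.dot(directions[0], axis), -1.0, 1.0)))
    ang1 = np.degrees(np.arccos(np.clip(np.dot(directions[1], -axis), -1.0, 1.0)))
    return sep, ang0, ang1

def compatible(sig_frag, sig_hole, d_tol=0.5, ang_tol=20.0):
    ds, a0, a1 = (abs(x - y) for x, y in zip(sig_frag, sig_hole))
    return ds <= d_tol and a0 <= ang_tol and a1 <= ang_tol
```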

The second criterion is pharmacophoric: a bioisosteric replacement must have ‘broadly similar biological properties’ and hence its interactions with the protein must be similar to the original molecule. Again, there’s a subtlety here. If I’m looking for a replacement for a triazolothiazole, then when I’m presented with a candidate fragment the obvious thing to do is ask “how similar is it to triazolothiazole?” This approach is flawed because the properties of the candidate fragment (and the initial triazolothiazole!) depend on their environment i.e. the rest of the molecule. This is especially true if you are interested in molecular electrostatics, as the electrostatic potential of a molecule is a global property that cannot generally be piecewise decomposed. That is why Spark performs all scoring in product space, not in fragment space (Figure 1). However, even if you are using a more crude method such as shape or pharmacophore similarity, treating the fragment in isolation can be very misleading.


Figure 1: Merging a new fragment can (subtly) change the electrostatics of the rest of the molecule. Red=positive, Blue=negative regions.

Working in product space brings advantages in steric considerations as well as electronic. Even though a particular fragment might have a beautiful geometric fit, and match the original core very well in terms of shape/electrostatics/color/pharmacophores/whatever, you still need to assess whether the fragment is compatible with the conformation of the original molecule. That assessment cannot be done in fragment space! In Spark we handle this in two ways. Firstly, all product molecules are minimized prior to scoring. If, due to steric or electronic effects, the candidate fragment just is not compatible with the desired conformation of the original molecule, then this minimization will distort the invariant parts and lead to a very low similarity score. Fragments which cause steric clashes or lone pair-lone pair repulsions are thus automatically filtered out (Figure 2). Secondly, we follow this up with an explicit assessment of strain energy around the newly-formed bonds, so the user can immediately see whether there are any causes for concern.

Figure 2: Tolyl is similar to phenyl, except when it causes a steric clash. Whether it does or not depends on what R is!

Another thing that can be drastically affected by the environment is the hybridization and charge state of the fragment (and the rest of the molecule). The main culprit here is nitrogen. Is an amine basic or not? Is the nitrogen pyramidal or trigonal planar? Both these questions depend strongly on what the nitrogen is attached to, and hence you cannot assess the suitability of a fragment without reference to the chemical environment that you are going to place it in. These questions turn out to be surprisingly difficult to handle in a completely robust way: there is a large amount of code in Spark which assesses the local chemical environment around the newly-formed bonds and determines whether any hybridization or formal charge changes are required. It turns out you cannot just re-assign the charges of the whole product molecule, as in some cases the user may have assigned particular charge states to parts of the molecule (based on experimental knowledge of the charge state when bound to the protein, for example) and you don’t want to undo that.

The final complication to performing the scoring properly (i.e. in product space) is accounting for the available degrees of freedom. If you are doing a core replacement, then the position of the core is generally completely determined by the attachment points. However, if you are doing a leaf replacement, then the question arises as to which rotamer to choose around the attachment bond. The only solution is to perform a limited conformation search around that bond and score each conformer, which has the potential to get very slow, although there are some computational tricks you can do to reduce the search space. It’s not just leaf groups that have to be searched: replacing the centre of a molecule might require a rotation scan if there are two attachment points which happen to be collinear.

So, our nice and simple bioisosterism calculation has become rather more complicated. Rather than just see if a fragment fits into the hole in the original molecule, we need to merge it in properly, assess any hybridization or formal charge state changes that are required, minimize the resulting molecule, perform a rotation scan around any underspecified degrees of freedom, recompute the electrostatic potentials, align to the original molecule and calculate the steric and electrostatic similarities. Only then can we decide whether the fragment is any good or not! Luckily, Spark does all of that for you, so as a user you can concentrate on the results, rather than on the complicated calculations required to produce those results.

Try it for yourself – download a free evaluation of Spark.

Examining the diversity of large collections of building blocks in 3D

Abstract

2D fingerprint-based Tanimoto distances are widely used for clustering due to the overall good balance between speed and effectiveness. However, there are significant limitations in the ability of a 2D fingerprint-based method to capture the biological similarity between molecules, especially when conformationally flexible structures are involved. Structures which appear to differ substantially in functional group decoration may nevertheless give rise to quite similar steric/electrostatic properties, which are what actually determine their recognition by biological macromolecules.

In BioBlocks’ Comprehensive Fragment Library (CFL) program, we were confronted with clustering a very large collection of scaffolds generated from first principles. Due to the largely unprecedented structures in the set and our design aim to populate the 3D ‘world’, using the best 3D metrics was critical. The structural diversity of the starting collection of about 800K heterocyclic scaffolds with variable functional group decoration was not adequately captured by 2D ECFP4 fingerprint Tanimoto distances, as shown by the rather flat distribution of 2D similarity values across the set, and by their lack of correlation with the 3D similarity metrics.

The initial step of any clustering procedure is the computation of an upper triangular matrix holding similarity values between all pairs of compounds. This step becomes computationally demanding when using 3D methods, since an optimum alignment between the molecules needs to be found, taking into account multiple conformers.
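To give a feel for the shape of this step, here is a minimal sketch using RDKit’s Open3DAlign as a stand-in for the 3D similarity method actually used in the CFL work: the best alignment score over all conformer pairs, stored in the upper triangle only. The cost scales with the square of both the number of compounds and the conformer count, which is exactly why this step needed special treatment.

```python
# Upper-triangular 3D similarity matrix over multi-conformer molecules,
# using RDKit Open3DAlign scores as an illustrative 3D metric.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolAlign

def embed(smiles, n_conf=10):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMultipleConfs(mol, numConfs=n_conf, randomSeed=3)
    return mol

def best_o3a_score(a, b):
    """Best alignment score over all conformer pairs of two molecules."""
    return max(rdMolAlign.GetO3A(a, b, prbCid=i, refCid=j).Score()
               for i in range(a.GetNumConformers())
               for j in range(b.GetNumConformers()))

mols = [embed(s) for s in ("CCOc1ccccc1", "CCSc1ccccc1", "OCCc1ccncc1")]
n = len(mols)
sim = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):  # upper triangle only
        sim[i, j] = best_o3a_score(mols[i], mols[j])
print(sim)
```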

The presentation covers the methodological and technical solutions adopted to enable 3D clustering of such a large set of compounds. Selected examples will be presented to compare the quality and the informative content of 3D vs 2D clusters.

Presentation

See presentation ‘Examining the diversity of large collections of building blocks in 3D’ given at the 250th ACS National Meeting.

Is it worth making? Assessing the information content of new structures

Abstract

We have recently presented a method of summarizing the information obtained from 3D activity cliff analysis: examination of all pairs of molecules can distinguish between apparent cliffs that are outliers, or due to measurement error, and those which consistently point to particular electrostatic and steric features having a large impact on activity. To do this it has proved essential to allow for alignment noise: no 3D alignment technique is perfect, so we apply a Bayesian analysis to correct for potential misalignments and for the case where a molecule is aligned correctly except for a flexible substituent whose conformation is under-constrained. We use the recent AZ/CCDC alignment validation data set to determine valid estimates for the Bayesian priors.

As an extension of this technique, it is possible to mine the data for a simple picture of explored pharmacophoric space, corrected for the conformational and alignment flexibility of each molecule. This provides an invaluable picture to the chemist of which parts of property space around a molecule have been adequately explored. When considering a new molecule for synthesis, it is possible to compute the amount that this would increase the explored pharmacophoric space and hence present an ‘information content’ score for the new molecule: if we made and tested this new molecule, how much would it actually increase the structure activity relationship (SAR) information content of the data set?

The combination of this, with the activity cliff summary data, allows a simple qualitative evaluation of the SAR of a data set in 3D, alongside guidance on which parts of pharmacophoric space have been mined out and which remain underexplored. We present the application of these techniques to several literature data sets.

Presentation

See presentation ‘Is it worth making? Assessing the information content of new structures‘ given at the 250th ACS National Meeting.